00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1996 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3257 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.019 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.019 The recommended git tool is: git 00:00:00.019 using credential 00000000-0000-0000-0000-000000000002 00:00:00.021 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.032 Fetching changes from the remote Git repository 00:00:00.034 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.046 Using shallow fetch with depth 1 00:00:00.046 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.046 > git --version # timeout=10 00:00:00.059 > git --version # 'git version 2.39.2' 00:00:00.059 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.078 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.078 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.576 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.590 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.603 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD) 00:00:02.603 > git config core.sparsecheckout # timeout=10 00:00:02.616 > git read-tree -mu HEAD # timeout=10 00:00:02.632 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5 00:00:02.651 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing" 00:00:02.651 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10 00:00:02.836 [Pipeline] Start of Pipeline 00:00:02.853 [Pipeline] library 00:00:02.855 Loading library shm_lib@master 00:00:02.855 Library shm_lib@master is cached. Copying from home. 00:00:02.872 [Pipeline] node 00:00:02.881 Running on VM-host-WFP7 in /var/jenkins/workspace/freebsd-vg-autotest 00:00:02.883 [Pipeline] { 00:00:02.894 [Pipeline] catchError 00:00:02.899 [Pipeline] { 00:00:02.910 [Pipeline] wrap 00:00:02.917 [Pipeline] { 00:00:02.923 [Pipeline] stage 00:00:02.925 [Pipeline] { (Prologue) 00:00:02.939 [Pipeline] echo 00:00:02.940 Node: VM-host-WFP7 00:00:02.946 [Pipeline] cleanWs 00:00:02.953 [WS-CLEANUP] Deleting project workspace... 00:00:02.953 [WS-CLEANUP] Deferred wipeout is used... 
00:00:02.959 [WS-CLEANUP] done 00:00:03.129 [Pipeline] setCustomBuildProperty 00:00:03.254 [Pipeline] httpRequest 00:00:03.275 [Pipeline] echo 00:00:03.277 Sorcerer 10.211.164.101 is alive 00:00:03.284 [Pipeline] httpRequest 00:00:03.287 HttpMethod: GET 00:00:03.287 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:03.288 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:03.289 Response Code: HTTP/1.1 200 OK 00:00:03.289 Success: Status code 200 is in the accepted range: 200,404 00:00:03.289 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:03.432 [Pipeline] sh 00:00:03.706 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:03.721 [Pipeline] httpRequest 00:00:03.755 [Pipeline] echo 00:00:03.757 Sorcerer 10.211.164.101 is alive 00:00:03.779 [Pipeline] httpRequest 00:00:03.784 HttpMethod: GET 00:00:03.784 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:03.784 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:03.786 Response Code: HTTP/1.1 200 OK 00:00:03.786 Success: Status code 200 is in the accepted range: 200,404 00:00:03.786 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:17.558 [Pipeline] sh 00:00:17.844 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:20.383 [Pipeline] sh 00:00:20.666 + git -C spdk log --oneline -n5 00:00:20.666 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:00:20.666 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:00:20.666 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:00:20.666 e03c164a1 nvme: add nvme_ctrlr_lock 00:00:20.666 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:00:20.688 [Pipeline] writeFile 00:00:20.706 [Pipeline] sh 00:00:20.991 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:21.005 [Pipeline] sh 00:00:21.290 + cat autorun-spdk.conf 00:00:21.290 SPDK_TEST_UNITTEST=1 00:00:21.290 SPDK_RUN_VALGRIND=0 00:00:21.290 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:21.290 SPDK_TEST_NVME=1 00:00:21.290 SPDK_TEST_BLOCKDEV=1 00:00:21.290 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:21.298 RUN_NIGHTLY=1 00:00:21.300 [Pipeline] } 00:00:21.319 [Pipeline] // stage 00:00:21.339 [Pipeline] stage 00:00:21.343 [Pipeline] { (Run VM) 00:00:21.363 [Pipeline] sh 00:00:21.646 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:21.646 + echo 'Start stage prepare_nvme.sh' 00:00:21.646 Start stage prepare_nvme.sh 00:00:21.646 + [[ -n 1 ]] 00:00:21.646 + disk_prefix=ex1 00:00:21.646 + [[ -n /var/jenkins/workspace/freebsd-vg-autotest ]] 00:00:21.646 + [[ -e /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf ]] 00:00:21.646 + source /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf 00:00:21.646 ++ SPDK_TEST_UNITTEST=1 00:00:21.646 ++ SPDK_RUN_VALGRIND=0 00:00:21.646 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:21.646 ++ SPDK_TEST_NVME=1 00:00:21.646 ++ SPDK_TEST_BLOCKDEV=1 00:00:21.646 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:21.646 ++ RUN_NIGHTLY=1 00:00:21.646 + cd /var/jenkins/workspace/freebsd-vg-autotest 00:00:21.646 + nvme_files=() 00:00:21.646 + declare -A nvme_files 00:00:21.646 + backend_dir=/var/lib/libvirt/images/backends 00:00:21.646 + 
nvme_files['nvme.img']=5G 00:00:21.646 + nvme_files['nvme-cmb.img']=5G 00:00:21.646 + nvme_files['nvme-multi0.img']=4G 00:00:21.646 + nvme_files['nvme-multi1.img']=4G 00:00:21.646 + nvme_files['nvme-multi2.img']=4G 00:00:21.646 + nvme_files['nvme-openstack.img']=8G 00:00:21.646 + nvme_files['nvme-zns.img']=5G 00:00:21.646 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:21.646 + (( SPDK_TEST_FTL == 1 )) 00:00:21.646 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:21.646 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:21.646 + for nvme in "${!nvme_files[@]}" 00:00:21.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:21.646 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:21.646 + for nvme in "${!nvme_files[@]}" 00:00:21.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:21.646 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:21.646 + for nvme in "${!nvme_files[@]}" 00:00:21.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:21.646 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:21.646 + for nvme in "${!nvme_files[@]}" 00:00:21.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:21.646 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:21.646 + for nvme in "${!nvme_files[@]}" 00:00:21.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:21.646 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:21.646 + for nvme in "${!nvme_files[@]}" 00:00:21.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:21.646 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:21.646 + for nvme in "${!nvme_files[@]}" 00:00:21.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:21.905 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:21.905 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:22.163 + echo 'End stage prepare_nvme.sh' 00:00:22.163 End stage prepare_nvme.sh 00:00:22.176 [Pipeline] sh 00:00:22.455 + DISTRO=freebsd13 CPUS=10 RAM=14336 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:22.455 Setup: -n 10 -s 14336 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -H -a -v -f freebsd13 00:00:22.455 00:00:22.455 DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant 00:00:22.455 SPDK_DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk 00:00:22.455 VAGRANT_TARGET=/var/jenkins/workspace/freebsd-vg-autotest 00:00:22.455 HELP=0 00:00:22.455 DRY_RUN=0 00:00:22.455 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img, 00:00:22.455 NVME_DISKS_TYPE=nvme, 00:00:22.455 NVME_AUTO_CREATE=0 00:00:22.455 NVME_DISKS_NAMESPACES=, 00:00:22.455 NVME_CMB=, 00:00:22.455 
NVME_PMR=, 00:00:22.455 NVME_ZNS=, 00:00:22.455 NVME_MS=, 00:00:22.455 NVME_FDP=, 00:00:22.455 SPDK_VAGRANT_DISTRO=freebsd13 00:00:22.455 SPDK_VAGRANT_VMCPU=10 00:00:22.455 SPDK_VAGRANT_VMRAM=14336 00:00:22.455 SPDK_VAGRANT_PROVIDER=libvirt 00:00:22.455 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:22.455 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:22.455 SPDK_OPENSTACK_NETWORK=0 00:00:22.455 VAGRANT_PACKAGE_BOX=0 00:00:22.455 VAGRANTFILE=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:22.455 FORCE_DISTRO=true 00:00:22.455 VAGRANT_BOX_VERSION= 00:00:22.455 EXTRA_VAGRANTFILES= 00:00:22.455 NIC_MODEL=virtio 00:00:22.455 00:00:22.455 mkdir: created directory '/var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt' 00:00:22.455 /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt /var/jenkins/workspace/freebsd-vg-autotest 00:00:24.989 Bringing machine 'default' up with 'libvirt' provider... 00:00:25.249 ==> default: Creating image (snapshot of base box volume). 00:00:25.249 ==> default: Creating domain with the following settings... 00:00:25.249 ==> default: -- Name: freebsd13-13.2-RELEASE-1712646987-2220_default_1720617844_b586fece4149a8326f3f 00:00:25.249 ==> default: -- Domain type: kvm 00:00:25.249 ==> default: -- Cpus: 10 00:00:25.249 ==> default: -- Feature: acpi 00:00:25.249 ==> default: -- Feature: apic 00:00:25.249 ==> default: -- Feature: pae 00:00:25.249 ==> default: -- Memory: 14336M 00:00:25.249 ==> default: -- Memory Backing: hugepages: 00:00:25.249 ==> default: -- Management MAC: 00:00:25.249 ==> default: -- Loader: 00:00:25.249 ==> default: -- Nvram: 00:00:25.249 ==> default: -- Base box: spdk/freebsd13 00:00:25.249 ==> default: -- Storage pool: default 00:00:25.249 ==> default: -- Image: /var/lib/libvirt/images/freebsd13-13.2-RELEASE-1712646987-2220_default_1720617844_b586fece4149a8326f3f.img (32G) 00:00:25.249 ==> default: -- Volume Cache: default 00:00:25.249 ==> default: -- Kernel: 00:00:25.249 ==> default: -- Initrd: 00:00:25.249 ==> default: -- Graphics Type: vnc 00:00:25.249 ==> default: -- Graphics Port: -1 00:00:25.249 ==> default: -- Graphics IP: 127.0.0.1 00:00:25.249 ==> default: -- Graphics Password: Not defined 00:00:25.249 ==> default: -- Video Type: cirrus 00:00:25.249 ==> default: -- Video VRAM: 9216 00:00:25.249 ==> default: -- Sound Type: 00:00:25.249 ==> default: -- Keymap: en-us 00:00:25.249 ==> default: -- TPM Path: 00:00:25.249 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:25.249 ==> default: -- Command line args: 00:00:25.249 ==> default: -> value=-device, 00:00:25.249 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:00:25.249 ==> default: -> value=-drive, 00:00:25.249 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:00:25.249 ==> default: -> value=-device, 00:00:25.249 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:25.508 ==> default: Creating shared folders metadata... 00:00:25.508 ==> default: Starting domain. 00:00:26.887 ==> default: Waiting for domain to get an IP address... 00:00:48.823 ==> default: Waiting for SSH to become available... 00:01:01.022 ==> default: Configuring and enabling network interfaces... 
00:01:04.315 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:09.603 ==> default: Mounting SSHFS shared folder... 00:01:10.170 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt/output => /home/vagrant/spdk_repo/output 00:01:10.170 ==> default: Checking Mount.. 00:01:10.738 ==> default: Folder Successfully Mounted! 00:01:10.738 ==> default: Running provisioner: file... 00:01:11.306 default: ~/.gitconfig => .gitconfig 00:01:11.564 00:01:11.564 SUCCESS! 00:01:11.564 00:01:11.564 cd to /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt and type "vagrant ssh" to use. 00:01:11.564 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:11.564 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt" to destroy all trace of vm. 00:01:11.564 00:01:11.572 [Pipeline] } 00:01:11.588 [Pipeline] // stage 00:01:11.596 [Pipeline] dir 00:01:11.596 Running in /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt 00:01:11.598 [Pipeline] { 00:01:11.608 [Pipeline] catchError 00:01:11.610 [Pipeline] { 00:01:11.621 [Pipeline] sh 00:01:11.901 + vagrant ssh-config --host vagrant 00:01:11.901 + sed -ne /^Host/,$p 00:01:11.901 + tee ssh_conf 00:01:14.448 Host vagrant 00:01:14.448 HostName 192.168.121.210 00:01:14.448 User vagrant 00:01:14.448 Port 22 00:01:14.448 UserKnownHostsFile /dev/null 00:01:14.448 StrictHostKeyChecking no 00:01:14.448 PasswordAuthentication no 00:01:14.448 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-freebsd13/13.2-RELEASE-1712646987-2220/libvirt/freebsd13 00:01:14.448 IdentitiesOnly yes 00:01:14.448 LogLevel FATAL 00:01:14.448 ForwardAgent yes 00:01:14.448 ForwardX11 yes 00:01:14.448 00:01:14.459 [Pipeline] withEnv 00:01:14.461 [Pipeline] { 00:01:14.475 [Pipeline] sh 00:01:14.755 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:14.755 source /etc/os-release 00:01:14.755 [[ -e /image.version ]] && img=$(< /image.version) 00:01:14.755 # Minimal, systemd-like check. 00:01:14.755 if [[ -e /.dockerenv ]]; then 00:01:14.755 # Clear garbage from the node's name: 00:01:14.755 # agt-er_autotest_547-896 -> autotest_547-896 00:01:14.755 # $HOSTNAME is the actual container id 00:01:14.755 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:14.755 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:14.755 # We can assume this is a mount from a host where container is running, 00:01:14.755 # so fetch its hostname to easily identify the target swarm worker. 
00:01:14.755 container="$(< /etc/hostname) ($agent)" 00:01:14.755 else 00:01:14.755 # Fallback 00:01:14.755 container=$agent 00:01:14.755 fi 00:01:14.755 fi 00:01:14.755 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:14.755 00:01:14.768 [Pipeline] } 00:01:14.787 [Pipeline] // withEnv 00:01:14.795 [Pipeline] setCustomBuildProperty 00:01:14.807 [Pipeline] stage 00:01:14.808 [Pipeline] { (Tests) 00:01:14.824 [Pipeline] sh 00:01:15.141 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:15.155 [Pipeline] sh 00:01:15.436 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:15.453 [Pipeline] timeout 00:01:15.453 Timeout set to expire in 1 hr 30 min 00:01:15.455 [Pipeline] { 00:01:15.473 [Pipeline] sh 00:01:15.758 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:16.327 HEAD is now at 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:16.341 [Pipeline] sh 00:01:16.625 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:16.643 [Pipeline] sh 00:01:16.930 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:16.954 [Pipeline] sh 00:01:17.237 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant CXX=/usr/bin/clang++ CC=/usr/bin/clang JOB_BASE_NAME=freebsd-vg-autotest ./autoruner.sh spdk_repo 00:01:17.237 ++ readlink -f spdk_repo 00:01:17.237 + DIR_ROOT=/usr/home/vagrant/spdk_repo 00:01:17.237 + [[ -n /usr/home/vagrant/spdk_repo ]] 00:01:17.237 + DIR_SPDK=/usr/home/vagrant/spdk_repo/spdk 00:01:17.237 + DIR_OUTPUT=/usr/home/vagrant/spdk_repo/output 00:01:17.237 + [[ -d /usr/home/vagrant/spdk_repo/spdk ]] 00:01:17.237 + [[ ! 
-d /usr/home/vagrant/spdk_repo/output ]] 00:01:17.237 + [[ -d /usr/home/vagrant/spdk_repo/output ]] 00:01:17.237 + [[ freebsd-vg-autotest == pkgdep-* ]] 00:01:17.237 + cd /usr/home/vagrant/spdk_repo 00:01:17.237 + source /etc/os-release 00:01:17.237 ++ NAME=FreeBSD 00:01:17.237 ++ VERSION=13.2-RELEASE 00:01:17.237 ++ VERSION_ID=13.2 00:01:17.237 ++ ID=freebsd 00:01:17.237 ++ ANSI_COLOR='0;31' 00:01:17.237 ++ PRETTY_NAME='FreeBSD 13.2-RELEASE' 00:01:17.237 ++ CPE_NAME=cpe:/o:freebsd:freebsd:13.2 00:01:17.237 ++ HOME_URL=https://FreeBSD.org/ 00:01:17.237 ++ BUG_REPORT_URL=https://bugs.FreeBSD.org/ 00:01:17.237 + uname -a 00:01:17.237 FreeBSD freebsd-cloud-1712646987-2220.local 13.2-RELEASE FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC amd64 00:01:17.237 + sudo /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:17.497 Contigmem (not present) 00:01:17.497 Buffer Size: not set 00:01:17.497 Num Buffers: not set 00:01:17.497 00:01:17.497 00:01:17.497 Type BDF Vendor Device Driver 00:01:17.497 NVMe 0:0:6:0 0x1b36 0x0010 nvme0 00:01:17.497 + rm -f /tmp/spdk-ld-path 00:01:17.497 + source autorun-spdk.conf 00:01:17.497 ++ SPDK_TEST_UNITTEST=1 00:01:17.497 ++ SPDK_RUN_VALGRIND=0 00:01:17.497 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.497 ++ SPDK_TEST_NVME=1 00:01:17.497 ++ SPDK_TEST_BLOCKDEV=1 00:01:17.497 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.497 ++ RUN_NIGHTLY=1 00:01:17.497 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:17.497 + [[ -n '' ]] 00:01:17.497 + sudo git config --global --add safe.directory /usr/home/vagrant/spdk_repo/spdk 00:01:17.497 + for M in /var/spdk/build-*-manifest.txt 00:01:17.497 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:17.497 + cp /var/spdk/build-pkg-manifest.txt /usr/home/vagrant/spdk_repo/output/ 00:01:17.497 + for M in /var/spdk/build-*-manifest.txt 00:01:17.497 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:17.497 + cp /var/spdk/build-repo-manifest.txt /usr/home/vagrant/spdk_repo/output/ 00:01:17.497 ++ uname 00:01:17.497 + [[ FreeBSD == \L\i\n\u\x ]] 00:01:17.497 + dmesg_pid=1261 00:01:17.497 + tail -F /var/log/messages 00:01:17.497 + [[ FreeBSD == FreeBSD ]] 00:01:17.497 + export LC_ALL=C LC_CTYPE=C 00:01:17.497 + LC_ALL=C 00:01:17.497 + LC_CTYPE=C 00:01:17.497 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:17.497 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:17.497 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:17.497 + [[ -x /usr/src/fio-static/fio ]] 00:01:17.497 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:17.497 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:17.497 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:17.497 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:17.497 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:17.497 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:17.497 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:17.497 + spdk/autorun.sh /usr/home/vagrant/spdk_repo/autorun-spdk.conf 00:01:17.497 Test configuration: 00:01:17.497 SPDK_TEST_UNITTEST=1 00:01:17.497 SPDK_RUN_VALGRIND=0 00:01:17.497 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.497 SPDK_TEST_NVME=1 00:01:17.497 SPDK_TEST_BLOCKDEV=1 00:01:17.497 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.755 RUN_NIGHTLY=1 13:24:56 -- common/autobuild_common.sh@15 -- $ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:17.755 13:24:56 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:17.755 13:24:56 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:17.755 13:24:56 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:17.755 13:24:56 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:01:17.755 13:24:56 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:01:17.755 13:24:56 -- paths/export.sh@4 -- $ export PATH 00:01:17.755 13:24:56 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:01:17.756 13:24:56 -- common/autobuild_common.sh@434 -- $ out=/usr/home/vagrant/spdk_repo/spdk/../output 00:01:17.756 13:24:56 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:17.756 13:24:56 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720617896.XXXXXX 00:01:17.756 13:24:56 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720617896.XXXXXX.kd7wxUMN 00:01:17.756 13:24:56 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:17.756 13:24:56 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:17.756 13:24:56 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/' 00:01:17.756 13:24:56 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:17.756 13:24:56 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /usr/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/ --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:17.756 13:24:56 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:17.756 13:24:56 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:17.756 13:24:56 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.756 13:24:57 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:01:17.756 13:24:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:17.756 13:24:57 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:17.756 13:24:57 -- spdk/autobuild.sh@13 -- $ cd /usr/home/vagrant/spdk_repo/spdk 00:01:17.756 13:24:57 -- spdk/autobuild.sh@16 -- $ date -u 
00:01:17.756 Wed Jul 10 13:24:57 UTC 2024 00:01:17.756 13:24:57 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:17.756 LTS-59-g4b94202c6 00:01:17.756 13:24:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:17.756 13:24:57 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']' 00:01:17.756 13:24:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:17.756 13:24:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:17.756 13:24:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:17.756 13:24:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:17.756 13:24:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:17.756 13:24:57 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:01:17.756 13:24:57 -- spdk/autobuild.sh@58 -- $ unittest_build 00:01:17.756 13:24:57 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build 00:01:17.756 13:24:57 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:01:17.756 13:24:57 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:17.756 13:24:57 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.756 ************************************ 00:01:17.756 START TEST unittest_build 00:01:17.756 ************************************ 00:01:17.756 13:24:57 -- common/autotest_common.sh@1104 -- $ _unittest_build 00:01:17.756 13:24:57 -- common/autobuild_common.sh@402 -- $ /usr/home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --without-shared 00:01:18.701 Notice: Vhost, rte_vhost library, virtio, and fuse 00:01:18.701 are only supported on Linux. Turning off default feature. 00:01:18.958 Using default SPDK env in /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:18.958 Using default DPDK in /usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:01:19.895 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:01:19.895 Using 'verbs' RDMA provider 00:01:32.101 Configuring ISA-L (logfile: /usr/home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:01:44.304 Configuring ISA-L-crypto (logfile: /usr/home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:44.564 Creating mk/config.mk...done. 00:01:44.564 Creating mk/cc.flags.mk...done. 00:01:44.564 Type 'gmake' to build. 00:01:44.564 13:25:23 -- common/autobuild_common.sh@403 -- $ gmake -j10 00:01:44.823 gmake[1]: Nothing to be done for 'all'. 
00:01:48.113 ps: stdin: not a terminal 00:01:53.390 The Meson build system 00:01:53.390 Version: 1.3.1 00:01:53.390 Source dir: /usr/home/vagrant/spdk_repo/spdk/dpdk 00:01:53.390 Build dir: /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:53.390 Build type: native build 00:01:53.390 Program cat found: YES (/bin/cat) 00:01:53.390 Project name: DPDK 00:01:53.390 Project version: 23.11.0 00:01:53.390 C compiler for the host machine: /usr/bin/clang (clang 14.0.5 "FreeBSD clang version 14.0.5 (https://github.com/llvm/llvm-project.git llvmorg-14.0.5-0-gc12386ae247c)") 00:01:53.390 C linker for the host machine: /usr/bin/clang ld.lld 14.0.5 00:01:53.390 Host machine cpu family: x86_64 00:01:53.390 Host machine cpu: x86_64 00:01:53.390 Message: ## Building in Developer Mode ## 00:01:53.390 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:01:53.390 Program check-symbols.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:53.390 Program options-ibverbs-static.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:53.390 Program python3 found: YES (/usr/local/bin/python3.9) 00:01:53.390 Program cat found: YES (/bin/cat) 00:01:53.390 Compiler for C supports arguments -march=native: YES 00:01:53.390 Checking for size of "void *" : 8 00:01:53.390 Checking for size of "void *" : 8 (cached) 00:01:53.390 Library m found: YES 00:01:53.390 Library numa found: NO 00:01:53.390 Library fdt found: NO 00:01:53.390 Library execinfo found: YES 00:01:53.390 Has header "execinfo.h" : YES 00:01:53.390 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.0.3 00:01:53.390 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:53.390 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:53.390 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:53.390 Run-time dependency openssl found: YES 3.0.13 00:01:53.390 Run-time dependency libpcap found: NO (tried pkgconfig) 00:01:53.390 Library pcap found: YES 00:01:53.390 Has header "pcap.h" with dependency -lpcap: YES 00:01:53.390 Compiler for C supports arguments -Wcast-qual: YES 00:01:53.390 Compiler for C supports arguments -Wdeprecated: YES 00:01:53.390 Compiler for C supports arguments -Wformat: YES 00:01:53.390 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:53.390 Compiler for C supports arguments -Wformat-security: YES 00:01:53.390 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:53.390 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:53.390 Compiler for C supports arguments -Wnested-externs: YES 00:01:53.390 Compiler for C supports arguments -Wold-style-definition: YES 00:01:53.390 Compiler for C supports arguments -Wpointer-arith: YES 00:01:53.390 Compiler for C supports arguments -Wsign-compare: YES 00:01:53.390 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:53.390 Compiler for C supports arguments -Wundef: YES 00:01:53.390 Compiler for C supports arguments -Wwrite-strings: YES 00:01:53.390 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:53.390 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:01:53.390 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:53.390 Compiler for C supports arguments -mavx512f: YES 00:01:53.390 Checking if "AVX512 checking" compiles: YES 00:01:53.390 Fetching value of define "__SSE4_2__" : 1 00:01:53.390 Fetching value of define "__AES__" : 1 00:01:53.390 Fetching value of define 
"__AVX__" : 1 00:01:53.390 Fetching value of define "__AVX2__" : 1 00:01:53.390 Fetching value of define "__AVX512BW__" : 1 00:01:53.390 Fetching value of define "__AVX512CD__" : 1 00:01:53.390 Fetching value of define "__AVX512DQ__" : 1 00:01:53.390 Fetching value of define "__AVX512F__" : 1 00:01:53.390 Fetching value of define "__AVX512VL__" : 1 00:01:53.390 Fetching value of define "__PCLMUL__" : 1 00:01:53.390 Fetching value of define "__RDRND__" : 1 00:01:53.390 Fetching value of define "__RDSEED__" : 1 00:01:53.390 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:53.390 Fetching value of define "__znver1__" : (undefined) 00:01:53.390 Fetching value of define "__znver2__" : (undefined) 00:01:53.390 Fetching value of define "__znver3__" : (undefined) 00:01:53.390 Fetching value of define "__znver4__" : (undefined) 00:01:53.390 Compiler for C supports arguments -Wno-format-truncation: NO 00:01:53.390 Message: lib/log: Defining dependency "log" 00:01:53.390 Message: lib/kvargs: Defining dependency "kvargs" 00:01:53.390 Message: lib/telemetry: Defining dependency "telemetry" 00:01:53.390 Checking if "Detect argument count for CPU_OR" compiles: YES 00:01:53.390 Checking for function "getentropy" : YES 00:01:53.390 Message: lib/eal: Defining dependency "eal" 00:01:53.390 Message: lib/ring: Defining dependency "ring" 00:01:53.390 Message: lib/rcu: Defining dependency "rcu" 00:01:53.390 Message: lib/mempool: Defining dependency "mempool" 00:01:53.390 Message: lib/mbuf: Defining dependency "mbuf" 00:01:53.390 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:53.390 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:53.390 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:53.390 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:53.390 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:53.390 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:53.390 Compiler for C supports arguments -mpclmul: YES 00:01:53.390 Compiler for C supports arguments -maes: YES 00:01:53.390 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:53.390 Compiler for C supports arguments -mavx512bw: YES 00:01:53.390 Compiler for C supports arguments -mavx512dq: YES 00:01:53.390 Compiler for C supports arguments -mavx512vl: YES 00:01:53.390 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:53.390 Compiler for C supports arguments -mavx2: YES 00:01:53.390 Compiler for C supports arguments -mavx: YES 00:01:53.390 Message: lib/net: Defining dependency "net" 00:01:53.390 Message: lib/meter: Defining dependency "meter" 00:01:53.390 Message: lib/ethdev: Defining dependency "ethdev" 00:01:53.390 Message: lib/pci: Defining dependency "pci" 00:01:53.390 Message: lib/cmdline: Defining dependency "cmdline" 00:01:53.390 Message: lib/hash: Defining dependency "hash" 00:01:53.390 Message: lib/timer: Defining dependency "timer" 00:01:53.390 Message: lib/compressdev: Defining dependency "compressdev" 00:01:53.390 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:53.390 Message: lib/dmadev: Defining dependency "dmadev" 00:01:53.390 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:53.390 Message: lib/reorder: Defining dependency "reorder" 00:01:53.390 Message: lib/security: Defining dependency "security" 00:01:53.390 Has header "linux/userfaultfd.h" : NO 00:01:53.390 Has header "linux/vduse.h" : NO 00:01:53.390 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:01:53.390 Message: drivers/bus/pci: Defining 
dependency "bus_pci" 00:01:53.390 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:53.390 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:53.390 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:53.390 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:53.390 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:53.390 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:01:53.390 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:53.390 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:53.390 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:53.390 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:53.390 Configuring doxy-api-html.conf using configuration 00:01:53.391 Configuring doxy-api-man.conf using configuration 00:01:53.391 Program mandb found: NO 00:01:53.391 Program sphinx-build found: NO 00:01:53.391 Configuring rte_build_config.h using configuration 00:01:53.391 Message: 00:01:53.391 ================= 00:01:53.391 Applications Enabled 00:01:53.391 ================= 00:01:53.391 00:01:53.391 apps: 00:01:53.391 00:01:53.391 00:01:53.391 Message: 00:01:53.391 ================= 00:01:53.391 Libraries Enabled 00:01:53.391 ================= 00:01:53.391 00:01:53.391 libs: 00:01:53.391 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:53.391 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:53.391 cryptodev, dmadev, reorder, security, 00:01:53.391 00:01:53.391 Message: 00:01:53.391 =============== 00:01:53.391 Drivers Enabled 00:01:53.391 =============== 00:01:53.391 00:01:53.391 common: 00:01:53.391 00:01:53.391 bus: 00:01:53.391 pci, vdev, 00:01:53.391 mempool: 00:01:53.391 ring, 00:01:53.391 dma: 00:01:53.391 00:01:53.391 net: 00:01:53.391 00:01:53.391 crypto: 00:01:53.391 00:01:53.391 compress: 00:01:53.391 00:01:53.391 00:01:53.391 Message: 00:01:53.391 ================= 00:01:53.391 Content Skipped 00:01:53.391 ================= 00:01:53.391 00:01:53.391 apps: 00:01:53.391 dumpcap: explicitly disabled via build config 00:01:53.391 graph: explicitly disabled via build config 00:01:53.391 pdump: explicitly disabled via build config 00:01:53.391 proc-info: explicitly disabled via build config 00:01:53.391 test-acl: explicitly disabled via build config 00:01:53.391 test-bbdev: explicitly disabled via build config 00:01:53.391 test-cmdline: explicitly disabled via build config 00:01:53.391 test-compress-perf: explicitly disabled via build config 00:01:53.391 test-crypto-perf: explicitly disabled via build config 00:01:53.391 test-dma-perf: explicitly disabled via build config 00:01:53.391 test-eventdev: explicitly disabled via build config 00:01:53.391 test-fib: explicitly disabled via build config 00:01:53.391 test-flow-perf: explicitly disabled via build config 00:01:53.391 test-gpudev: explicitly disabled via build config 00:01:53.391 test-mldev: explicitly disabled via build config 00:01:53.391 test-pipeline: explicitly disabled via build config 00:01:53.391 test-pmd: explicitly disabled via build config 00:01:53.391 test-regex: explicitly disabled via build config 00:01:53.391 test-sad: explicitly disabled via build config 00:01:53.391 test-security-perf: explicitly disabled via build config 00:01:53.391 00:01:53.391 libs: 00:01:53.391 metrics: explicitly disabled via build config 00:01:53.391 acl: explicitly disabled via 
build config 00:01:53.391 bbdev: explicitly disabled via build config 00:01:53.391 bitratestats: explicitly disabled via build config 00:01:53.391 bpf: explicitly disabled via build config 00:01:53.391 cfgfile: explicitly disabled via build config 00:01:53.391 distributor: explicitly disabled via build config 00:01:53.391 efd: explicitly disabled via build config 00:01:53.391 eventdev: explicitly disabled via build config 00:01:53.391 dispatcher: explicitly disabled via build config 00:01:53.391 gpudev: explicitly disabled via build config 00:01:53.391 gro: explicitly disabled via build config 00:01:53.391 gso: explicitly disabled via build config 00:01:53.391 ip_frag: explicitly disabled via build config 00:01:53.391 jobstats: explicitly disabled via build config 00:01:53.391 latencystats: explicitly disabled via build config 00:01:53.391 lpm: explicitly disabled via build config 00:01:53.391 member: explicitly disabled via build config 00:01:53.391 pcapng: explicitly disabled via build config 00:01:53.391 power: only supported on Linux 00:01:53.391 rawdev: explicitly disabled via build config 00:01:53.391 regexdev: explicitly disabled via build config 00:01:53.391 mldev: explicitly disabled via build config 00:01:53.391 rib: explicitly disabled via build config 00:01:53.391 sched: explicitly disabled via build config 00:01:53.391 stack: explicitly disabled via build config 00:01:53.391 vhost: only supported on Linux 00:01:53.391 ipsec: explicitly disabled via build config 00:01:53.391 pdcp: explicitly disabled via build config 00:01:53.391 fib: explicitly disabled via build config 00:01:53.391 port: explicitly disabled via build config 00:01:53.391 pdump: explicitly disabled via build config 00:01:53.391 table: explicitly disabled via build config 00:01:53.391 pipeline: explicitly disabled via build config 00:01:53.391 graph: explicitly disabled via build config 00:01:53.391 node: explicitly disabled via build config 00:01:53.391 00:01:53.391 drivers: 00:01:53.391 common/cpt: not in enabled drivers build config 00:01:53.391 common/dpaax: not in enabled drivers build config 00:01:53.391 common/iavf: not in enabled drivers build config 00:01:53.391 common/idpf: not in enabled drivers build config 00:01:53.391 common/mvep: not in enabled drivers build config 00:01:53.391 common/octeontx: not in enabled drivers build config 00:01:53.391 bus/auxiliary: not in enabled drivers build config 00:01:53.391 bus/cdx: not in enabled drivers build config 00:01:53.391 bus/dpaa: not in enabled drivers build config 00:01:53.391 bus/fslmc: not in enabled drivers build config 00:01:53.391 bus/ifpga: not in enabled drivers build config 00:01:53.391 bus/platform: not in enabled drivers build config 00:01:53.391 bus/vmbus: not in enabled drivers build config 00:01:53.391 common/cnxk: not in enabled drivers build config 00:01:53.391 common/mlx5: not in enabled drivers build config 00:01:53.391 common/nfp: not in enabled drivers build config 00:01:53.391 common/qat: not in enabled drivers build config 00:01:53.391 common/sfc_efx: not in enabled drivers build config 00:01:53.391 mempool/bucket: not in enabled drivers build config 00:01:53.391 mempool/cnxk: not in enabled drivers build config 00:01:53.391 mempool/dpaa: not in enabled drivers build config 00:01:53.391 mempool/dpaa2: not in enabled drivers build config 00:01:53.391 mempool/octeontx: not in enabled drivers build config 00:01:53.391 mempool/stack: not in enabled drivers build config 00:01:53.391 dma/cnxk: not in enabled drivers build config 
00:01:53.391 dma/dpaa: not in enabled drivers build config 00:01:53.391 dma/dpaa2: not in enabled drivers build config 00:01:53.391 dma/hisilicon: not in enabled drivers build config 00:01:53.391 dma/idxd: not in enabled drivers build config 00:01:53.391 dma/ioat: not in enabled drivers build config 00:01:53.391 dma/skeleton: not in enabled drivers build config 00:01:53.391 net/af_packet: not in enabled drivers build config 00:01:53.391 net/af_xdp: not in enabled drivers build config 00:01:53.391 net/ark: not in enabled drivers build config 00:01:53.391 net/atlantic: not in enabled drivers build config 00:01:53.391 net/avp: not in enabled drivers build config 00:01:53.391 net/axgbe: not in enabled drivers build config 00:01:53.391 net/bnx2x: not in enabled drivers build config 00:01:53.391 net/bnxt: not in enabled drivers build config 00:01:53.391 net/bonding: not in enabled drivers build config 00:01:53.391 net/cnxk: not in enabled drivers build config 00:01:53.391 net/cpfl: not in enabled drivers build config 00:01:53.391 net/cxgbe: not in enabled drivers build config 00:01:53.391 net/dpaa: not in enabled drivers build config 00:01:53.391 net/dpaa2: not in enabled drivers build config 00:01:53.391 net/e1000: not in enabled drivers build config 00:01:53.391 net/ena: not in enabled drivers build config 00:01:53.391 net/enetc: not in enabled drivers build config 00:01:53.391 net/enetfec: not in enabled drivers build config 00:01:53.391 net/enic: not in enabled drivers build config 00:01:53.391 net/failsafe: not in enabled drivers build config 00:01:53.391 net/fm10k: not in enabled drivers build config 00:01:53.391 net/gve: not in enabled drivers build config 00:01:53.391 net/hinic: not in enabled drivers build config 00:01:53.391 net/hns3: not in enabled drivers build config 00:01:53.391 net/i40e: not in enabled drivers build config 00:01:53.391 net/iavf: not in enabled drivers build config 00:01:53.391 net/ice: not in enabled drivers build config 00:01:53.391 net/idpf: not in enabled drivers build config 00:01:53.391 net/igc: not in enabled drivers build config 00:01:53.391 net/ionic: not in enabled drivers build config 00:01:53.391 net/ipn3ke: not in enabled drivers build config 00:01:53.391 net/ixgbe: not in enabled drivers build config 00:01:53.391 net/mana: not in enabled drivers build config 00:01:53.391 net/memif: not in enabled drivers build config 00:01:53.391 net/mlx4: not in enabled drivers build config 00:01:53.391 net/mlx5: not in enabled drivers build config 00:01:53.391 net/mvneta: not in enabled drivers build config 00:01:53.391 net/mvpp2: not in enabled drivers build config 00:01:53.391 net/netvsc: not in enabled drivers build config 00:01:53.391 net/nfb: not in enabled drivers build config 00:01:53.391 net/nfp: not in enabled drivers build config 00:01:53.391 net/ngbe: not in enabled drivers build config 00:01:53.391 net/null: not in enabled drivers build config 00:01:53.391 net/octeontx: not in enabled drivers build config 00:01:53.391 net/octeon_ep: not in enabled drivers build config 00:01:53.391 net/pcap: not in enabled drivers build config 00:01:53.391 net/pfe: not in enabled drivers build config 00:01:53.391 net/qede: not in enabled drivers build config 00:01:53.391 net/ring: not in enabled drivers build config 00:01:53.391 net/sfc: not in enabled drivers build config 00:01:53.391 net/softnic: not in enabled drivers build config 00:01:53.391 net/tap: not in enabled drivers build config 00:01:53.391 net/thunderx: not in enabled drivers build config 00:01:53.391 
net/txgbe: not in enabled drivers build config 00:01:53.391 net/vdev_netvsc: not in enabled drivers build config 00:01:53.391 net/vhost: not in enabled drivers build config 00:01:53.391 net/virtio: not in enabled drivers build config 00:01:53.391 net/vmxnet3: not in enabled drivers build config 00:01:53.391 raw/*: missing internal dependency, "rawdev" 00:01:53.391 crypto/armv8: not in enabled drivers build config 00:01:53.391 crypto/bcmfs: not in enabled drivers build config 00:01:53.391 crypto/caam_jr: not in enabled drivers build config 00:01:53.391 crypto/ccp: not in enabled drivers build config 00:01:53.391 crypto/cnxk: not in enabled drivers build config 00:01:53.391 crypto/dpaa_sec: not in enabled drivers build config 00:01:53.391 crypto/dpaa2_sec: not in enabled drivers build config 00:01:53.391 crypto/ipsec_mb: not in enabled drivers build config 00:01:53.391 crypto/mlx5: not in enabled drivers build config 00:01:53.391 crypto/mvsam: not in enabled drivers build config 00:01:53.391 crypto/nitrox: not in enabled drivers build config 00:01:53.391 crypto/null: not in enabled drivers build config 00:01:53.391 crypto/octeontx: not in enabled drivers build config 00:01:53.391 crypto/openssl: not in enabled drivers build config 00:01:53.391 crypto/scheduler: not in enabled drivers build config 00:01:53.391 crypto/uadk: not in enabled drivers build config 00:01:53.392 crypto/virtio: not in enabled drivers build config 00:01:53.392 compress/isal: not in enabled drivers build config 00:01:53.392 compress/mlx5: not in enabled drivers build config 00:01:53.392 compress/octeontx: not in enabled drivers build config 00:01:53.392 compress/zlib: not in enabled drivers build config 00:01:53.392 regex/*: missing internal dependency, "regexdev" 00:01:53.392 ml/*: missing internal dependency, "mldev" 00:01:53.392 vdpa/*: missing internal dependency, "vhost" 00:01:53.392 event/*: missing internal dependency, "eventdev" 00:01:53.392 baseband/*: missing internal dependency, "bbdev" 00:01:53.392 gpu/*: missing internal dependency, "gpudev" 00:01:53.392 00:01:53.392 00:01:53.392 Build targets in project: 81 00:01:53.392 00:01:53.392 DPDK 23.11.0 00:01:53.392 00:01:53.392 User defined options 00:01:53.392 buildtype : debug 00:01:53.392 default_library : static 00:01:53.392 libdir : lib 00:01:53.392 prefix : / 00:01:53.392 c_args : -fPIC -Werror 00:01:53.392 c_link_args : 00:01:53.392 cpu_instruction_set: native 00:01:53.392 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:53.392 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:53.392 enable_docs : false 00:01:53.392 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:53.392 enable_kmods : true 00:01:53.392 tests : false 00:01:53.392 00:01:53.392 Found ninja-1.11.1 at /usr/local/bin/ninja 00:01:53.392 ninja: Entering directory `/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:53.392 [1/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:53.392 [2/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:53.392 [3/231] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:01:53.392 
[4/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:53.392 [5/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:53.392 [6/231] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:53.392 [7/231] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:53.392 [8/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:53.392 [9/231] Linking static target lib/librte_log.a 00:01:53.392 [10/231] Linking static target lib/librte_kvargs.a 00:01:53.392 [11/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:53.392 [12/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:53.392 [13/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:53.650 [14/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:53.650 [15/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:53.650 [16/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:53.650 [17/231] Linking static target lib/librte_telemetry.a 00:01:53.651 [18/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:53.651 [19/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:53.651 [20/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:53.651 [21/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:53.651 [22/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:53.651 [23/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:53.908 [24/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:53.908 [25/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:53.908 [26/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:53.908 [27/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:53.908 [28/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:53.908 [29/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:53.908 [30/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:53.908 [31/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:53.908 [32/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:53.908 [33/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:53.908 [34/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:53.908 [35/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:53.908 [36/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:54.166 [37/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:54.166 [38/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:54.166 [39/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:54.166 [40/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:54.166 [41/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:54.166 [42/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:54.166 [43/231] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:54.166 [44/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:54.166 [45/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:54.166 [46/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:54.166 [47/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:54.424 [48/231] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.424 [49/231] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:54.424 [50/231] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:54.424 [51/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:01:54.424 [52/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:54.424 [53/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:54.424 [54/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:01:54.424 [55/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:54.424 [56/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:54.424 [57/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:54.424 [58/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:54.683 [59/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:54.683 [60/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:54.683 [61/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:01:54.683 [62/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:54.683 [63/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:01:54.683 [64/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:54.683 [65/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:01:54.683 [66/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:01:54.683 [67/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:01:54.683 [68/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:01:54.683 [69/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:01:54.683 [70/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:01:54.683 [71/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:01:54.942 [72/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:54.942 [73/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:54.942 [74/231] Linking static target lib/librte_eal.a 00:01:54.942 [75/231] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:54.942 [76/231] Linking static target lib/librte_ring.a 00:01:55.201 [77/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:55.201 [78/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:55.201 [79/231] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:55.201 [80/231] Linking static target lib/librte_rcu.a 00:01:55.201 [81/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:55.201 [82/231] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:55.201 [83/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:55.201 [84/231] Linking static target 
lib/librte_mempool.a 00:01:55.201 [85/231] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.201 [86/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:55.201 [87/231] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.201 [88/231] Linking target lib/librte_log.so.24.0 00:01:55.201 [89/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:55.201 [90/231] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.460 [91/231] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.460 [92/231] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:55.460 [93/231] Linking target lib/librte_kvargs.so.24.0 00:01:55.460 [94/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:55.460 [95/231] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:55.460 [96/231] Linking static target lib/librte_mbuf.a 00:01:55.460 [97/231] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:55.460 [98/231] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:55.460 [99/231] Linking target lib/librte_telemetry.so.24.0 00:01:55.460 [100/231] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:55.460 [101/231] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:55.460 [102/231] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:55.460 [103/231] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:55.460 [104/231] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:55.460 [105/231] Linking static target lib/librte_net.a 00:01:55.460 [106/231] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:55.460 [107/231] Linking static target lib/librte_meter.a 00:01:55.720 [108/231] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:55.720 [109/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:55.720 [110/231] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.720 [111/231] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.978 [112/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:55.978 [113/231] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.978 [114/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:55.978 [115/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:55.979 [116/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:56.237 [117/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:56.237 [118/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:56.237 [119/231] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:56.237 [120/231] Linking static target lib/librte_pci.a 00:01:56.237 [121/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:56.237 [122/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:56.237 [123/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:56.237 [124/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:56.237 [125/231] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:56.237 [126/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:56.237 [127/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:56.237 [128/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:56.497 [129/231] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.497 [130/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:56.497 [131/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:56.497 [132/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:56.497 [133/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:56.497 [134/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:56.497 [135/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:56.497 [136/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:56.497 [137/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:56.497 [138/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:56.497 [139/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:56.497 [140/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:56.757 [141/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:56.757 [142/231] Linking static target lib/librte_ethdev.a 00:01:56.757 [143/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:56.757 [144/231] Linking static target lib/librte_cmdline.a 00:01:56.757 [145/231] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:56.757 [146/231] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:56.757 [147/231] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.757 [148/231] Linking static target lib/librte_timer.a 00:01:56.757 [149/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:56.757 [150/231] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:57.016 [151/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:57.016 [152/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:57.016 [153/231] Linking static target lib/librte_compressdev.a 00:01:57.016 [154/231] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:57.016 [155/231] Linking static target lib/librte_hash.a 00:01:57.016 [156/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:57.016 [157/231] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.016 [158/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:57.016 [159/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:57.276 [160/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:57.276 [161/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:57.276 [162/231] Linking static target lib/librte_dmadev.a 00:01:57.276 [163/231] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:57.276 [164/231] Linking static target lib/librte_reorder.a 00:01:57.276 [165/231] Generating lib/compressdev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:57.536 [166/231] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:57.536 [167/231] Linking static target lib/librte_security.a 00:01:57.536 [168/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:57.536 [169/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:57.536 [170/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:57.536 [171/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:57.536 [172/231] Linking static target lib/librte_cryptodev.a 00:01:57.536 [173/231] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.536 [174/231] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.536 [175/231] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.536 [176/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:01:57.536 [177/231] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:57.536 [178/231] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.536 [179/231] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.796 [180/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:57.796 [181/231] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:57.796 [182/231] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:57.796 [183/231] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:57.796 [184/231] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:57.796 [185/231] Linking static target drivers/librte_bus_pci.a 00:01:57.796 [186/231] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:57.796 [187/231] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:57.796 [188/231] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:57.796 [189/231] Linking static target drivers/librte_bus_vdev.a 00:01:57.796 [190/231] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:57.796 [191/231] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:58.056 [192/231] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.056 [193/231] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.056 [194/231] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:58.056 [195/231] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:58.056 [196/231] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.056 [197/231] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:58.056 [198/231] Linking static target drivers/librte_mempool_ring.a 00:01:59.982 [199/231] Generating kernel/freebsd/contigmem with a custom command 00:01:59.982 machine -> /usr/src/sys/amd64/include 00:01:59.982 x86 -> /usr/src/sys/x86/include 00:01:59.982 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:01:59.982 awk 
-f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:01:59.982 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:01:59.982 touch opt_global.h 00:01:59.983 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:01:59.983 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:01:59.983 :> export_syms 00:01:59.983 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:01:59.983 objcopy --strip-debug contigmem.ko 00:01:59.983 [200/231] Generating kernel/freebsd/nic_uio with a custom command 00:01:59.983 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:01:59.983 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:01:59.983 :> export_syms 00:01:59.983 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:01:59.983 objcopy --strip-debug nic_uio.ko 00:02:04.225 [201/231] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.433 [202/231] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.433 [203/231] Linking target lib/librte_eal.so.24.0 00:02:08.692 [204/231] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:08.692 [205/231] Linking target lib/librte_dmadev.so.24.0 00:02:08.692 [206/231] Linking target drivers/librte_bus_vdev.so.24.0 00:02:08.692 [207/231] Linking target lib/librte_timer.so.24.0 00:02:08.692 [208/231] Linking target lib/librte_pci.so.24.0 00:02:08.692 [209/231] Linking target lib/librte_ring.so.24.0 00:02:08.692 [210/231] Linking target lib/librte_meter.so.24.0 00:02:08.692 [211/231] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:08.692 [212/231] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:08.692 [213/231] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:08.692 [214/231] Linking target lib/librte_mempool.so.24.0 00:02:08.692 [215/231] Linking target lib/librte_rcu.so.24.0 00:02:08.692 [216/231] Linking target drivers/librte_bus_pci.so.24.0 00:02:08.951 [217/231] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:08.951 [218/231] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:08.951 [219/231] Linking target lib/librte_mbuf.so.24.0 00:02:08.951 [220/231] Linking target drivers/librte_mempool_ring.so.24.0 00:02:08.951 [221/231] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:09.209 [222/231] Linking target lib/librte_compressdev.so.24.0 00:02:09.209 [223/231] Linking target lib/librte_net.so.24.0 00:02:09.209 [224/231] Linking target lib/librte_reorder.so.24.0 00:02:09.209 [225/231] Linking target lib/librte_cryptodev.so.24.0 00:02:09.209 [226/231] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:09.209 [227/231] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:09.209 [228/231] Linking target 
lib/librte_hash.so.24.0 00:02:09.209 [229/231] Linking target lib/librte_cmdline.so.24.0 00:02:09.209 [230/231] Linking target lib/librte_ethdev.so.24.0 00:02:09.209 [231/231] Linking target lib/librte_security.so.24.0 00:02:09.209 INFO: autodetecting backend as ninja 00:02:09.209 INFO: calculating backend command to run: /usr/local/bin/ninja -C /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:10.144 CC lib/ut/ut.o 00:02:10.145 CC lib/log/log.o 00:02:10.145 CC lib/log/log_flags.o 00:02:10.145 CC lib/log/log_deprecated.o 00:02:10.145 CC lib/ut_mock/mock.o 00:02:10.145 LIB libspdk_ut_mock.a 00:02:10.145 LIB libspdk_log.a 00:02:10.145 LIB libspdk_ut.a 00:02:10.403 CXX lib/trace_parser/trace.o 00:02:10.403 CC lib/dma/dma.o 00:02:10.403 CC lib/ioat/ioat.o 00:02:10.403 CC lib/util/base64.o 00:02:10.404 CC lib/util/bit_array.o 00:02:10.404 CC lib/util/cpuset.o 00:02:10.404 CC lib/util/crc16.o 00:02:10.404 CC lib/util/crc32.o 00:02:10.404 CC lib/util/crc32c.o 00:02:10.404 CC lib/util/crc32_ieee.o 00:02:10.661 LIB libspdk_dma.a 00:02:10.661 CC lib/util/crc64.o 00:02:10.661 CC lib/util/dif.o 00:02:10.661 CC lib/util/fd.o 00:02:10.661 CC lib/util/file.o 00:02:10.661 CC lib/util/hexlify.o 00:02:10.661 CC lib/util/iov.o 00:02:10.661 LIB libspdk_ioat.a 00:02:10.661 CC lib/util/math.o 00:02:10.661 CC lib/util/pipe.o 00:02:10.661 CC lib/util/strerror_tls.o 00:02:10.661 CC lib/util/string.o 00:02:10.661 CC lib/util/uuid.o 00:02:10.661 CC lib/util/fd_group.o 00:02:10.661 CC lib/util/xor.o 00:02:10.661 CC lib/util/zipf.o 00:02:10.661 LIB libspdk_util.a 00:02:10.921 CC lib/conf/conf.o 00:02:10.921 CC lib/rdma/common.o 00:02:10.921 CC lib/rdma/rdma_verbs.o 00:02:10.921 CC lib/idxd/idxd.o 00:02:10.921 CC lib/idxd/idxd_user.o 00:02:10.921 CC lib/env_dpdk/env.o 00:02:10.921 CC lib/env_dpdk/memory.o 00:02:10.921 CC lib/json/json_parse.o 00:02:10.921 CC lib/vmd/vmd.o 00:02:10.921 CC lib/json/json_util.o 00:02:10.921 LIB libspdk_conf.a 00:02:10.921 CC lib/env_dpdk/pci.o 00:02:10.921 CC lib/vmd/led.o 00:02:10.921 CC lib/env_dpdk/init.o 00:02:10.921 LIB libspdk_rdma.a 00:02:10.921 CC lib/env_dpdk/threads.o 00:02:10.921 LIB libspdk_idxd.a 00:02:10.921 CC lib/env_dpdk/pci_ioat.o 00:02:10.921 CC lib/env_dpdk/pci_virtio.o 00:02:10.921 LIB libspdk_vmd.a 00:02:10.921 CC lib/json/json_write.o 00:02:11.180 CC lib/env_dpdk/pci_vmd.o 00:02:11.180 CC lib/env_dpdk/pci_idxd.o 00:02:11.180 CC lib/env_dpdk/pci_event.o 00:02:11.180 CC lib/env_dpdk/sigbus_handler.o 00:02:11.180 CC lib/env_dpdk/pci_dpdk.o 00:02:11.180 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:11.180 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:11.180 LIB libspdk_json.a 00:02:11.180 CC lib/jsonrpc/jsonrpc_server.o 00:02:11.180 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:11.180 CC lib/jsonrpc/jsonrpc_client.o 00:02:11.180 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:11.439 LIB libspdk_jsonrpc.a 00:02:11.439 LIB libspdk_trace_parser.a 00:02:11.439 LIB libspdk_env_dpdk.a 00:02:11.439 CC lib/rpc/rpc.o 00:02:11.698 LIB libspdk_rpc.a 00:02:11.698 CC lib/trace/trace_flags.o 00:02:11.698 CC lib/trace/trace.o 00:02:11.698 CC lib/trace/trace_rpc.o 00:02:11.698 CC lib/sock/sock.o 00:02:11.698 CC lib/sock/sock_rpc.o 00:02:11.698 CC lib/notify/notify_rpc.o 00:02:11.698 CC lib/notify/notify.o 00:02:11.958 LIB libspdk_trace.a 00:02:11.958 LIB libspdk_sock.a 00:02:11.958 LIB libspdk_notify.a 00:02:11.958 CC lib/thread/thread.o 00:02:11.958 CC lib/thread/iobuf.o 00:02:11.958 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:11.958 CC lib/nvme/nvme_ctrlr.o 00:02:11.958 CC lib/nvme/nvme_ns_cmd.o 00:02:11.958 CC 
lib/nvme/nvme_fabric.o 00:02:11.958 CC lib/nvme/nvme_ns.o 00:02:11.958 CC lib/nvme/nvme_pcie_common.o 00:02:11.958 CC lib/nvme/nvme_pcie.o 00:02:11.958 CC lib/nvme/nvme_qpair.o 00:02:12.218 CC lib/nvme/nvme.o 00:02:12.218 LIB libspdk_thread.a 00:02:12.218 CC lib/nvme/nvme_quirks.o 00:02:12.479 CC lib/nvme/nvme_transport.o 00:02:12.479 CC lib/nvme/nvme_discovery.o 00:02:12.479 CC lib/accel/accel.o 00:02:12.479 CC lib/blob/blobstore.o 00:02:12.479 CC lib/accel/accel_rpc.o 00:02:12.479 CC lib/init/json_config.o 00:02:12.479 CC lib/accel/accel_sw.o 00:02:12.479 CC lib/init/subsystem.o 00:02:12.479 CC lib/blob/request.o 00:02:12.479 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:12.479 CC lib/init/subsystem_rpc.o 00:02:12.738 CC lib/blob/zeroes.o 00:02:12.738 CC lib/init/rpc.o 00:02:12.738 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:12.738 CC lib/blob/blob_bs_dev.o 00:02:12.738 LIB libspdk_accel.a 00:02:12.738 CC lib/nvme/nvme_tcp.o 00:02:12.738 CC lib/nvme/nvme_opal.o 00:02:12.738 LIB libspdk_init.a 00:02:12.738 CC lib/nvme/nvme_io_msg.o 00:02:12.738 CC lib/bdev/bdev.o 00:02:12.738 CC lib/event/app.o 00:02:12.738 CC lib/event/reactor.o 00:02:12.738 CC lib/bdev/bdev_rpc.o 00:02:12.999 CC lib/event/log_rpc.o 00:02:12.999 CC lib/bdev/bdev_zone.o 00:02:12.999 CC lib/nvme/nvme_poll_group.o 00:02:12.999 CC lib/bdev/part.o 00:02:12.999 CC lib/event/app_rpc.o 00:02:12.999 CC lib/nvme/nvme_zns.o 00:02:12.999 LIB libspdk_blob.a 00:02:12.999 CC lib/event/scheduler_static.o 00:02:12.999 CC lib/nvme/nvme_cuse.o 00:02:12.999 CC lib/nvme/nvme_rdma.o 00:02:12.999 LIB libspdk_event.a 00:02:12.999 CC lib/blobfs/blobfs.o 00:02:12.999 CC lib/blobfs/tree.o 00:02:12.999 CC lib/bdev/scsi_nvme.o 00:02:12.999 CC lib/lvol/lvol.o 00:02:13.258 LIB libspdk_blobfs.a 00:02:13.258 LIB libspdk_lvol.a 00:02:13.258 LIB libspdk_bdev.a 00:02:13.518 LIB libspdk_nvme.a 00:02:13.518 CC lib/scsi/dev.o 00:02:13.518 CC lib/scsi/port.o 00:02:13.518 CC lib/scsi/lun.o 00:02:13.518 CC lib/scsi/scsi.o 00:02:13.518 CC lib/scsi/scsi_bdev.o 00:02:13.518 CC lib/scsi/scsi_pr.o 00:02:13.518 CC lib/scsi/scsi_rpc.o 00:02:13.518 CC lib/scsi/task.o 00:02:13.518 CC lib/nvmf/ctrlr.o 00:02:13.518 CC lib/nvmf/ctrlr_discovery.o 00:02:13.518 CC lib/nvmf/ctrlr_bdev.o 00:02:13.518 CC lib/nvmf/subsystem.o 00:02:13.518 CC lib/nvmf/nvmf.o 00:02:13.518 CC lib/nvmf/transport.o 00:02:13.518 CC lib/nvmf/nvmf_rpc.o 00:02:13.518 CC lib/nvmf/tcp.o 00:02:13.518 CC lib/nvmf/rdma.o 00:02:13.776 LIB libspdk_scsi.a 00:02:13.776 CC lib/iscsi/conn.o 00:02:13.776 CC lib/iscsi/init_grp.o 00:02:13.776 CC lib/iscsi/iscsi.o 00:02:13.776 CC lib/iscsi/md5.o 00:02:13.776 CC lib/iscsi/param.o 00:02:13.776 CC lib/iscsi/portal_grp.o 00:02:13.776 CC lib/iscsi/tgt_node.o 00:02:13.776 CC lib/iscsi/iscsi_subsystem.o 00:02:13.776 CC lib/iscsi/iscsi_rpc.o 00:02:13.776 CC lib/iscsi/task.o 00:02:14.034 LIB libspdk_nvmf.a 00:02:14.034 LIB libspdk_iscsi.a 00:02:14.292 CC module/env_dpdk/env_dpdk_rpc.o 00:02:14.292 CC module/blob/bdev/blob_bdev.o 00:02:14.292 CC module/accel/error/accel_error.o 00:02:14.292 CC module/accel/error/accel_error_rpc.o 00:02:14.292 CC module/accel/iaa/accel_iaa.o 00:02:14.292 CC module/accel/iaa/accel_iaa_rpc.o 00:02:14.292 CC module/accel/dsa/accel_dsa.o 00:02:14.292 CC module/accel/ioat/accel_ioat.o 00:02:14.292 CC module/sock/posix/posix.o 00:02:14.292 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:14.292 LIB libspdk_env_dpdk_rpc.a 00:02:14.292 CC module/accel/dsa/accel_dsa_rpc.o 00:02:14.292 CC module/accel/ioat/accel_ioat_rpc.o 00:02:14.292 LIB libspdk_accel_error.a 
00:02:14.292 LIB libspdk_accel_iaa.a 00:02:14.292 LIB libspdk_blob_bdev.a 00:02:14.292 LIB libspdk_scheduler_dynamic.a 00:02:14.292 LIB libspdk_accel_ioat.a 00:02:14.292 LIB libspdk_accel_dsa.a 00:02:14.550 CC module/bdev/malloc/bdev_malloc.o 00:02:14.550 CC module/bdev/lvol/vbdev_lvol.o 00:02:14.550 CC module/blobfs/bdev/blobfs_bdev.o 00:02:14.550 CC module/bdev/delay/vbdev_delay.o 00:02:14.550 CC module/bdev/gpt/gpt.o 00:02:14.550 CC module/bdev/passthru/vbdev_passthru.o 00:02:14.550 CC module/bdev/nvme/bdev_nvme.o 00:02:14.550 CC module/bdev/error/vbdev_error.o 00:02:14.550 CC module/bdev/null/bdev_null.o 00:02:14.550 LIB libspdk_sock_posix.a 00:02:14.550 CC module/bdev/null/bdev_null_rpc.o 00:02:14.550 CC module/bdev/gpt/vbdev_gpt.o 00:02:14.550 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:14.550 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:14.550 CC module/bdev/error/vbdev_error_rpc.o 00:02:14.550 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:14.550 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:14.550 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:14.550 LIB libspdk_bdev_null.a 00:02:14.550 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:14.550 CC module/bdev/nvme/nvme_rpc.o 00:02:14.550 LIB libspdk_blobfs_bdev.a 00:02:14.550 LIB libspdk_bdev_gpt.a 00:02:14.550 LIB libspdk_bdev_error.a 00:02:14.550 CC module/bdev/nvme/bdev_mdns_client.o 00:02:14.550 LIB libspdk_bdev_passthru.a 00:02:14.550 LIB libspdk_bdev_delay.a 00:02:14.810 LIB libspdk_bdev_malloc.a 00:02:14.810 CC module/bdev/raid/bdev_raid.o 00:02:14.810 CC module/bdev/raid/bdev_raid_rpc.o 00:02:14.810 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:14.810 CC module/bdev/split/vbdev_split.o 00:02:14.810 CC module/bdev/aio/bdev_aio.o 00:02:14.810 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:14.810 CC module/bdev/raid/bdev_raid_sb.o 00:02:14.810 LIB libspdk_bdev_lvol.a 00:02:14.810 CC module/bdev/aio/bdev_aio_rpc.o 00:02:14.810 CC module/bdev/split/vbdev_split_rpc.o 00:02:14.810 CC module/bdev/raid/raid0.o 00:02:14.810 CC module/bdev/raid/raid1.o 00:02:14.810 CC module/bdev/raid/concat.o 00:02:14.810 LIB libspdk_bdev_aio.a 00:02:14.810 LIB libspdk_bdev_zone_block.a 00:02:14.810 LIB libspdk_bdev_nvme.a 00:02:14.810 LIB libspdk_bdev_split.a 00:02:15.069 LIB libspdk_bdev_raid.a 00:02:15.329 CC module/event/subsystems/iobuf/iobuf.o 00:02:15.329 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:15.329 CC module/event/subsystems/sock/sock.o 00:02:15.329 CC module/event/subsystems/scheduler/scheduler.o 00:02:15.329 CC module/event/subsystems/vmd/vmd.o 00:02:15.329 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:15.329 LIB libspdk_event_sock.a 00:02:15.329 LIB libspdk_event_vmd.a 00:02:15.329 LIB libspdk_event_scheduler.a 00:02:15.329 LIB libspdk_event_iobuf.a 00:02:15.588 CC module/event/subsystems/accel/accel.o 00:02:15.588 LIB libspdk_event_accel.a 00:02:15.847 CC module/event/subsystems/bdev/bdev.o 00:02:15.847 LIB libspdk_event_bdev.a 00:02:16.106 CC module/event/subsystems/scsi/scsi.o 00:02:16.106 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:16.106 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:16.106 LIB libspdk_event_scsi.a 00:02:16.106 LIB libspdk_event_nvmf.a 00:02:16.364 CC module/event/subsystems/iscsi/iscsi.o 00:02:16.364 LIB libspdk_event_iscsi.a 00:02:16.622 TEST_HEADER include/spdk/accel.h 00:02:16.622 TEST_HEADER include/spdk/accel_module.h 00:02:16.622 TEST_HEADER include/spdk/assert.h 00:02:16.622 TEST_HEADER include/spdk/barrier.h 00:02:16.622 TEST_HEADER include/spdk/base64.h 00:02:16.622 TEST_HEADER 
include/spdk/bdev.h 00:02:16.622 TEST_HEADER include/spdk/bdev_module.h 00:02:16.622 TEST_HEADER include/spdk/bdev_zone.h 00:02:16.622 TEST_HEADER include/spdk/bit_array.h 00:02:16.622 TEST_HEADER include/spdk/bit_pool.h 00:02:16.622 TEST_HEADER include/spdk/blob.h 00:02:16.622 TEST_HEADER include/spdk/blob_bdev.h 00:02:16.622 CXX app/trace/trace.o 00:02:16.622 TEST_HEADER include/spdk/blobfs.h 00:02:16.622 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:16.622 TEST_HEADER include/spdk/conf.h 00:02:16.622 TEST_HEADER include/spdk/config.h 00:02:16.622 TEST_HEADER include/spdk/cpuset.h 00:02:16.622 TEST_HEADER include/spdk/crc16.h 00:02:16.622 TEST_HEADER include/spdk/crc32.h 00:02:16.622 TEST_HEADER include/spdk/crc64.h 00:02:16.622 TEST_HEADER include/spdk/dif.h 00:02:16.622 TEST_HEADER include/spdk/dma.h 00:02:16.622 TEST_HEADER include/spdk/endian.h 00:02:16.622 TEST_HEADER include/spdk/env.h 00:02:16.622 TEST_HEADER include/spdk/env_dpdk.h 00:02:16.622 CC test/event/event_perf/event_perf.o 00:02:16.622 TEST_HEADER include/spdk/event.h 00:02:16.622 TEST_HEADER include/spdk/fd.h 00:02:16.622 TEST_HEADER include/spdk/fd_group.h 00:02:16.622 CC examples/accel/perf/accel_perf.o 00:02:16.622 TEST_HEADER include/spdk/file.h 00:02:16.622 TEST_HEADER include/spdk/ftl.h 00:02:16.622 TEST_HEADER include/spdk/gpt_spec.h 00:02:16.622 TEST_HEADER include/spdk/hexlify.h 00:02:16.622 TEST_HEADER include/spdk/histogram_data.h 00:02:16.622 TEST_HEADER include/spdk/idxd.h 00:02:16.622 TEST_HEADER include/spdk/idxd_spec.h 00:02:16.622 TEST_HEADER include/spdk/init.h 00:02:16.622 TEST_HEADER include/spdk/ioat.h 00:02:16.622 TEST_HEADER include/spdk/ioat_spec.h 00:02:16.622 TEST_HEADER include/spdk/iscsi_spec.h 00:02:16.622 TEST_HEADER include/spdk/json.h 00:02:16.622 TEST_HEADER include/spdk/jsonrpc.h 00:02:16.622 TEST_HEADER include/spdk/likely.h 00:02:16.622 CC test/env/mem_callbacks/mem_callbacks.o 00:02:16.622 TEST_HEADER include/spdk/log.h 00:02:16.622 CC test/bdev/bdevio/bdevio.o 00:02:16.622 CC test/app/bdev_svc/bdev_svc.o 00:02:16.622 CC test/accel/dif/dif.o 00:02:16.622 CC test/dma/test_dma/test_dma.o 00:02:16.622 TEST_HEADER include/spdk/lvol.h 00:02:16.622 CC test/blobfs/mkfs/mkfs.o 00:02:16.622 TEST_HEADER include/spdk/memory.h 00:02:16.622 TEST_HEADER include/spdk/mmio.h 00:02:16.622 TEST_HEADER include/spdk/nbd.h 00:02:16.622 TEST_HEADER include/spdk/notify.h 00:02:16.622 TEST_HEADER include/spdk/nvme.h 00:02:16.622 TEST_HEADER include/spdk/nvme_intel.h 00:02:16.622 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:16.622 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:16.622 TEST_HEADER include/spdk/nvme_spec.h 00:02:16.622 TEST_HEADER include/spdk/nvme_zns.h 00:02:16.622 TEST_HEADER include/spdk/nvmf.h 00:02:16.622 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:16.622 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:16.622 TEST_HEADER include/spdk/nvmf_spec.h 00:02:16.622 TEST_HEADER include/spdk/nvmf_transport.h 00:02:16.622 TEST_HEADER include/spdk/opal.h 00:02:16.622 TEST_HEADER include/spdk/opal_spec.h 00:02:16.622 TEST_HEADER include/spdk/pci_ids.h 00:02:16.622 TEST_HEADER include/spdk/pipe.h 00:02:16.622 TEST_HEADER include/spdk/queue.h 00:02:16.622 TEST_HEADER include/spdk/reduce.h 00:02:16.622 TEST_HEADER include/spdk/rpc.h 00:02:16.622 TEST_HEADER include/spdk/scheduler.h 00:02:16.622 TEST_HEADER include/spdk/scsi.h 00:02:16.622 TEST_HEADER include/spdk/scsi_spec.h 00:02:16.622 TEST_HEADER include/spdk/sock.h 00:02:16.622 TEST_HEADER include/spdk/stdinc.h 00:02:16.622 TEST_HEADER 
include/spdk/string.h 00:02:16.622 LINK event_perf 00:02:16.622 TEST_HEADER include/spdk/thread.h 00:02:16.622 TEST_HEADER include/spdk/trace.h 00:02:16.622 TEST_HEADER include/spdk/trace_parser.h 00:02:16.623 TEST_HEADER include/spdk/tree.h 00:02:16.623 TEST_HEADER include/spdk/ublk.h 00:02:16.623 TEST_HEADER include/spdk/util.h 00:02:16.623 TEST_HEADER include/spdk/uuid.h 00:02:16.623 TEST_HEADER include/spdk/version.h 00:02:16.623 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:16.623 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:16.623 TEST_HEADER include/spdk/vhost.h 00:02:16.623 TEST_HEADER include/spdk/vmd.h 00:02:16.623 TEST_HEADER include/spdk/xor.h 00:02:16.623 TEST_HEADER include/spdk/zipf.h 00:02:16.623 CXX test/cpp_headers/accel.o 00:02:16.880 LINK bdev_svc 00:02:16.880 LINK mkfs 00:02:16.880 LINK accel_perf 00:02:16.880 LINK dif 00:02:16.880 LINK test_dma 00:02:16.880 CC test/event/reactor/reactor.o 00:02:16.880 LINK bdevio 00:02:16.880 CXX test/cpp_headers/accel_module.o 00:02:16.880 LINK reactor 00:02:16.880 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:16.880 CC test/app/histogram_perf/histogram_perf.o 00:02:16.880 CC examples/bdev/bdevperf/bdevperf.o 00:02:16.880 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:16.880 CC test/app/jsoncat/jsoncat.o 00:02:16.880 CC examples/bdev/hello_world/hello_bdev.o 00:02:16.880 CXX test/cpp_headers/assert.o 00:02:16.880 CC test/event/reactor_perf/reactor_perf.o 00:02:16.880 LINK histogram_perf 00:02:17.139 LINK jsoncat 00:02:17.139 LINK mem_callbacks 00:02:17.139 LINK nvme_fuzz 00:02:17.139 LINK reactor_perf 00:02:17.139 LINK hello_bdev 00:02:17.139 CXX test/cpp_headers/barrier.o 00:02:17.139 CC test/env/vtophys/vtophys.o 00:02:17.139 gmake[2]: Nothing to be done for 'all'. 00:02:17.139 CC app/trace_record/trace_record.o 00:02:17.139 LINK spdk_trace 00:02:17.139 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:17.139 LINK bdevperf 00:02:17.139 LINK vtophys 00:02:17.139 CC app/nvmf_tgt/nvmf_main.o 00:02:17.139 LINK spdk_trace_record 00:02:17.139 CC test/app/stub/stub.o 00:02:17.139 CC examples/blob/hello_world/hello_blob.o 00:02:17.139 CXX test/cpp_headers/base64.o 00:02:17.139 LINK env_dpdk_post_init 00:02:17.139 CC examples/blob/cli/blobcli.o 00:02:17.139 LINK nvmf_tgt 00:02:17.139 CXX test/cpp_headers/bdev.o 00:02:17.397 LINK stub 00:02:17.397 CC app/iscsi_tgt/iscsi_tgt.o 00:02:17.397 CC test/env/memory/memory_ut.o 00:02:17.397 LINK hello_blob 00:02:17.397 LINK iscsi_fuzz 00:02:17.397 CC test/nvme/aer/aer.o 00:02:17.397 CC test/env/pci/pci_ut.o 00:02:17.397 LINK blobcli 00:02:17.397 CC test/nvme/reset/reset.o 00:02:17.397 CXX test/cpp_headers/bdev_module.o 00:02:17.397 LINK iscsi_tgt 00:02:17.397 CC examples/ioat/perf/perf.o 00:02:17.397 LINK aer 00:02:17.397 CC test/nvme/sgl/sgl.o 00:02:17.397 LINK ioat_perf 00:02:17.397 LINK reset 00:02:17.397 CC app/spdk_tgt/spdk_tgt.o 00:02:17.397 LINK pci_ut 00:02:17.654 CC test/rpc_client/rpc_client_test.o 00:02:17.654 CXX test/cpp_headers/bdev_zone.o 00:02:17.654 CC examples/nvme/hello_world/hello_world.o 00:02:17.654 LINK sgl 00:02:17.654 CC app/spdk_lspci/spdk_lspci.o 00:02:17.654 CC app/spdk_nvme_perf/perf.o 00:02:17.654 CC examples/ioat/verify/verify.o 00:02:17.654 LINK rpc_client_test 00:02:17.654 LINK spdk_tgt 00:02:17.654 CC examples/sock/hello_world/hello_sock.o 00:02:17.654 LINK spdk_lspci 00:02:17.654 LINK hello_world 00:02:17.654 CC test/nvme/e2edp/nvme_dp.o 00:02:17.654 LINK verify 00:02:17.654 CC examples/nvme/reconnect/reconnect.o 00:02:17.654 CXX 
test/cpp_headers/bit_array.o 00:02:17.654 LINK memory_ut 00:02:17.654 CXX test/cpp_headers/bit_pool.o 00:02:17.654 CC test/nvme/overhead/overhead.o 00:02:17.654 LINK hello_sock 00:02:17.654 CC test/nvme/err_injection/err_injection.o 00:02:17.912 CC app/spdk_nvme_identify/identify.o 00:02:17.912 LINK spdk_nvme_perf 00:02:17.912 LINK nvme_dp 00:02:17.912 CC examples/vmd/lsvmd/lsvmd.o 00:02:17.912 LINK reconnect 00:02:17.912 CC test/nvme/startup/startup.o 00:02:17.912 LINK err_injection 00:02:17.912 CXX test/cpp_headers/blob.o 00:02:17.912 LINK overhead 00:02:17.912 LINK lsvmd 00:02:17.912 CXX test/cpp_headers/blob_bdev.o 00:02:17.912 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:17.912 CC test/thread/poller_perf/poller_perf.o 00:02:17.912 LINK startup 00:02:17.912 CC examples/nvmf/nvmf/nvmf.o 00:02:17.912 CC examples/nvme/arbitration/arbitration.o 00:02:17.912 CC examples/vmd/led/led.o 00:02:17.912 LINK spdk_nvme_identify 00:02:17.912 LINK poller_perf 00:02:17.912 LINK led 00:02:17.912 CC examples/util/zipf/zipf.o 00:02:18.169 CC test/nvme/reserve/reserve.o 00:02:18.169 CXX test/cpp_headers/blobfs.o 00:02:18.169 LINK arbitration 00:02:18.169 LINK nvmf 00:02:18.169 LINK nvme_manage 00:02:18.169 CC app/spdk_nvme_discover/discovery_aer.o 00:02:18.169 LINK zipf 00:02:18.169 CXX test/cpp_headers/blobfs_bdev.o 00:02:18.169 CC test/thread/lock/spdk_lock.o 00:02:18.169 LINK reserve 00:02:18.169 CXX test/cpp_headers/conf.o 00:02:18.169 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:02:18.169 LINK spdk_nvme_discover 00:02:18.169 CC examples/nvme/hotplug/hotplug.o 00:02:18.169 CC app/spdk_top/spdk_top.o 00:02:18.169 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:18.169 LINK histogram_ut 00:02:18.169 CC examples/thread/thread/thread_ex.o 00:02:18.169 CC test/nvme/simple_copy/simple_copy.o 00:02:18.169 CXX test/cpp_headers/config.o 00:02:18.169 CXX test/cpp_headers/cpuset.o 00:02:18.427 CC test/nvme/connect_stress/connect_stress.o 00:02:18.427 LINK cmb_copy 00:02:18.427 CC examples/idxd/perf/perf.o 00:02:18.427 LINK hotplug 00:02:18.427 LINK simple_copy 00:02:18.427 LINK thread 00:02:18.427 CC test/unit/lib/accel/accel.c/accel_ut.o 00:02:18.427 LINK spdk_lock 00:02:18.427 LINK connect_stress 00:02:18.427 CXX test/cpp_headers/crc16.o 00:02:18.427 CC test/nvme/boot_partition/boot_partition.o 00:02:18.427 LINK idxd_perf 00:02:18.427 LINK spdk_top 00:02:18.427 CC examples/nvme/abort/abort.o 00:02:18.427 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:18.427 CXX test/cpp_headers/crc32.o 00:02:18.427 LINK boot_partition 00:02:18.427 CC app/fio/nvme/fio_plugin.o 00:02:18.427 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:02:18.427 CC test/nvme/compliance/nvme_compliance.o 00:02:18.427 LINK pmr_persistence 00:02:18.685 CXX test/cpp_headers/crc64.o 00:02:18.685 CC app/fio/bdev/fio_plugin.o 00:02:18.685 LINK abort 00:02:18.685 CC test/unit/lib/bdev/part.c/part_ut.o 00:02:18.685 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:02:18.685 CC test/nvme/fused_ordering/fused_ordering.o 00:02:18.685 fio_plugin.c:1491:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:02:18.685 struct spdk_nvme_fdp_ruhs ruhs; 00:02:18.685 ^ 00:02:18.685 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:02:18.685 CXX test/cpp_headers/dif.o 00:02:18.685 LINK nvme_compliance 00:02:18.685 LINK fused_ordering 00:02:18.685 1 warning generated. 
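Editor's note on the single warning above: clang raises -Wgnu-variable-sized-type-not-at-end when a struct whose last member is a flexible array member (and is therefore variable-sized) is embedded as a non-final field of another struct, which is the shape reported for the 'ruhs' field in fio_plugin.c. Below is a minimal, hypothetical sketch of that pattern only; the type and field names are invented for illustration and are not the actual SPDK definitions. It compiles under clang with the same class of warning.

/*
 * Minimal reproduction of the "field with variable sized type not at the
 * end of a struct" pattern flagged in the log. Hypothetical names; not
 * SPDK code.
 */
struct ruhs_like {
        unsigned int num_descs;
        unsigned int descs[];   /* flexible array member => variable-sized type */
};

struct holder {
        struct ruhs_like ruhs;  /* not the last member -> GNU-extension warning */
        int trailing;
};

int main(void)
{
        /* sizeof is still well-defined; the flexible array contributes no size */
        return (int)sizeof(struct holder);
}

The warning is benign here because the flexible array is never populated through the embedded copy; it disappears if the variable-sized struct is made the final member or replaced by a pointer.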
00:02:18.685 LINK spdk_nvme 00:02:18.685 LINK spdk_bdev 00:02:18.685 LINK blob_bdev_ut 00:02:18.685 LINK scsi_nvme_ut 00:02:18.942 CXX test/cpp_headers/dma.o 00:02:18.942 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:02:18.942 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:18.942 CC test/unit/lib/blob/blob.c/blob_ut.o 00:02:18.942 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:02:18.942 LINK accel_ut 00:02:18.942 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:02:18.942 LINK doorbell_aers 00:02:18.942 CXX test/cpp_headers/endian.o 00:02:18.942 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:02:18.942 LINK gpt_ut 00:02:18.942 LINK tree_ut 00:02:18.942 CC test/nvme/fdp/fdp.o 00:02:18.942 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:02:18.942 CXX test/cpp_headers/env.o 00:02:18.942 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:02:19.200 LINK fdp 00:02:19.200 CC test/unit/lib/dma/dma.c/dma_ut.o 00:02:19.200 CXX test/cpp_headers/env_dpdk.o 00:02:19.200 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:02:19.200 LINK vbdev_lvol_ut 00:02:19.200 LINK dma_ut 00:02:19.200 LINK part_ut 00:02:19.200 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:02:19.200 CXX test/cpp_headers/event.o 00:02:19.200 LINK blobfs_async_ut 00:02:19.200 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:02:19.200 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:02:19.457 CXX test/cpp_headers/fd.o 00:02:19.457 LINK bdev_raid_sb_ut 00:02:19.457 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:02:19.457 LINK blobfs_bdev_ut 00:02:19.457 LINK bdev_zone_ut 00:02:19.457 LINK blobfs_sync_ut 00:02:19.457 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:02:19.457 LINK bdev_raid_ut 00:02:19.457 LINK bdev_ut 00:02:19.457 CXX test/cpp_headers/fd_group.o 00:02:19.457 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:02:19.457 CC test/unit/lib/event/app.c/app_ut.o 00:02:19.457 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:02:19.457 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:02:19.457 LINK concat_ut 00:02:19.713 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:02:19.713 CXX test/cpp_headers/file.o 00:02:19.713 CXX test/cpp_headers/ftl.o 00:02:19.713 LINK bdev_ut 00:02:19.713 LINK raid1_ut 00:02:19.713 LINK app_ut 00:02:19.713 CXX test/cpp_headers/gpt_spec.o 00:02:19.713 LINK ioat_ut 00:02:19.713 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:02:19.713 CXX test/cpp_headers/hexlify.o 00:02:19.713 LINK vbdev_zone_block_ut 00:02:19.713 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:02:19.713 LINK reactor_ut 00:02:19.713 CXX test/cpp_headers/histogram_data.o 00:02:19.713 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:02:19.713 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:02:19.713 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:02:19.971 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:02:19.971 CXX test/cpp_headers/idxd.o 00:02:19.971 LINK jsonrpc_server_ut 00:02:19.971 CC test/unit/lib/log/log.c/log_ut.o 00:02:19.971 LINK json_util_ut 00:02:19.971 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:02:19.971 LINK conn_ut 00:02:19.971 LINK init_grp_ut 00:02:19.971 LINK log_ut 00:02:19.971 CXX test/cpp_headers/idxd_spec.o 00:02:19.971 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:02:19.971 CC test/unit/lib/iscsi/param.c/param_ut.o 00:02:19.971 LINK blob_ut 00:02:20.228 CC test/unit/lib/notify/notify.c/notify_ut.o 00:02:20.228 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:02:20.228 CXX test/cpp_headers/init.o 00:02:20.228 LINK 
json_write_ut 00:02:20.228 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:02:20.228 LINK json_parse_ut 00:02:20.228 LINK notify_ut 00:02:20.228 LINK param_ut 00:02:20.228 CXX test/cpp_headers/ioat.o 00:02:20.228 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:02:20.228 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:02:20.228 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:02:20.228 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:02:20.485 CXX test/cpp_headers/ioat_spec.o 00:02:20.485 LINK portal_grp_ut 00:02:20.485 LINK bdev_nvme_ut 00:02:20.485 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:02:20.485 LINK lvol_ut 00:02:20.485 CXX test/cpp_headers/iscsi_spec.o 00:02:20.485 LINK iscsi_ut 00:02:20.485 LINK dev_ut 00:02:20.485 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:02:20.485 LINK tgt_node_ut 00:02:20.485 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:02:20.485 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:02:20.485 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:02:20.485 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:02:20.485 LINK scsi_ut 00:02:20.485 CXX test/cpp_headers/json.o 00:02:20.743 LINK lun_ut 00:02:20.743 LINK nvme_ut 00:02:20.743 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:02:20.743 CXX test/cpp_headers/jsonrpc.o 00:02:20.743 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:02:20.743 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:02:20.743 CXX test/cpp_headers/likely.o 00:02:20.743 LINK scsi_bdev_ut 00:02:21.000 LINK ctrlr_bdev_ut 00:02:21.000 CXX test/cpp_headers/log.o 00:02:21.000 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:02:21.567 LINK ctrlr_discovery_ut 00:02:21.567 LINK subsystem_ut 00:02:21.567 LINK scsi_pr_ut 00:02:21.567 LINK nvme_ctrlr_cmd_ut 00:02:21.567 LINK nvme_ctrlr_ut 00:02:21.567 LINK ctrlr_ut 00:02:21.567 LINK tcp_ut 00:02:21.567 CXX test/cpp_headers/lvol.o 00:02:21.567 CXX test/cpp_headers/memory.o 00:02:21.567 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:02:21.567 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:02:21.567 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:02:21.567 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:02:21.567 CC test/unit/lib/sock/sock.c/sock_ut.o 00:02:21.567 CC test/unit/lib/thread/thread.c/thread_ut.o 00:02:21.567 CXX test/cpp_headers/mmio.o 00:02:21.567 CC test/unit/lib/sock/posix.c/posix_ut.o 00:02:21.567 CC test/unit/lib/util/base64.c/base64_ut.o 00:02:21.825 LINK nvmf_ut 00:02:21.825 LINK base64_ut 00:02:21.825 CXX test/cpp_headers/nbd.o 00:02:21.825 CXX test/cpp_headers/notify.o 00:02:21.825 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:02:21.825 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:02:21.825 CXX test/cpp_headers/nvme.o 00:02:21.825 LINK posix_ut 00:02:21.825 LINK nvme_ctrlr_ocssd_cmd_ut 00:02:21.825 LINK thread_ut 00:02:21.825 LINK nvme_ns_ut 00:02:21.825 LINK pci_event_ut 00:02:22.084 LINK sock_ut 00:02:22.084 LINK bit_array_ut 00:02:22.084 CXX test/cpp_headers/nvme_intel.o 00:02:22.084 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:02:22.084 CXX test/cpp_headers/nvme_ocssd.o 00:02:22.084 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:02:22.084 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:02:22.084 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:02:22.084 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:02:22.084 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:02:22.084 LINK crc16_ut 00:02:22.084 LINK cpuset_ut 00:02:22.084 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:22.084 CC 
test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:02:22.084 LINK subsystem_ut 00:02:22.084 LINK nvme_ns_cmd_ut 00:02:22.084 LINK rdma_ut 00:02:22.084 CXX test/cpp_headers/nvme_spec.o 00:02:22.084 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:02:22.084 LINK iobuf_ut 00:02:22.084 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:02:22.343 CXX test/cpp_headers/nvme_zns.o 00:02:22.343 LINK crc32_ieee_ut 00:02:22.343 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:02:22.343 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:02:22.343 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:02:22.343 LINK rpc_ut 00:02:22.343 LINK crc32c_ut 00:02:22.343 CXX test/cpp_headers/nvmf.o 00:02:22.343 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:02:22.343 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:02:22.343 LINK idxd_user_ut 00:02:22.343 CC test/unit/lib/rdma/common.c/common_ut.o 00:02:22.343 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:02:22.602 CXX test/cpp_headers/nvmf_cmd.o 00:02:22.602 LINK crc64_ut 00:02:22.602 LINK transport_ut 00:02:22.602 CC test/unit/lib/util/dif.c/dif_ut.o 00:02:22.602 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:02:22.602 LINK common_ut 00:02:22.602 LINK idxd_ut 00:02:22.602 CC test/unit/lib/util/iov.c/iov_ut.o 00:02:22.602 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:02:22.602 LINK nvme_ns_ocssd_cmd_ut 00:02:22.602 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:22.602 CC test/unit/lib/util/math.c/math_ut.o 00:02:22.602 LINK iov_ut 00:02:22.602 LINK nvme_poll_group_ut 00:02:22.602 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:02:22.602 CXX test/cpp_headers/nvmf_spec.o 00:02:22.860 LINK math_ut 00:02:22.860 CXX test/cpp_headers/nvmf_transport.o 00:02:22.860 LINK nvme_pcie_ut 00:02:22.860 CC test/unit/lib/util/string.c/string_ut.o 00:02:22.860 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:02:22.860 LINK dif_ut 00:02:22.860 CC test/unit/lib/util/xor.c/xor_ut.o 00:02:22.860 LINK nvme_qpair_ut 00:02:22.860 LINK pipe_ut 00:02:22.860 CXX test/cpp_headers/opal.o 00:02:22.860 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:02:22.860 LINK string_ut 00:02:22.860 CXX test/cpp_headers/opal_spec.o 00:02:22.860 CXX test/cpp_headers/pci_ids.o 00:02:22.860 LINK xor_ut 00:02:22.860 LINK nvme_quirks_ut 00:02:22.860 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:02:22.860 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:02:22.860 CXX test/cpp_headers/pipe.o 00:02:23.118 CXX test/cpp_headers/queue.o 00:02:23.118 CXX test/cpp_headers/reduce.o 00:02:23.118 CXX test/cpp_headers/rpc.o 00:02:23.118 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:02:23.118 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:02:23.118 CXX test/cpp_headers/scheduler.o 00:02:23.118 CXX test/cpp_headers/scsi.o 00:02:23.118 CXX test/cpp_headers/scsi_spec.o 00:02:23.118 LINK nvme_transport_ut 00:02:23.118 LINK nvme_opal_ut 00:02:23.376 CXX test/cpp_headers/sock.o 00:02:23.376 CXX test/cpp_headers/stdinc.o 00:02:23.376 LINK nvme_io_msg_ut 00:02:23.376 CXX test/cpp_headers/string.o 00:02:23.376 CXX test/cpp_headers/thread.o 00:02:23.376 CXX test/cpp_headers/trace.o 00:02:23.376 CXX test/cpp_headers/trace_parser.o 00:02:23.376 CXX test/cpp_headers/tree.o 00:02:23.376 CXX test/cpp_headers/ublk.o 00:02:23.376 LINK nvme_fabric_ut 00:02:23.376 CXX test/cpp_headers/util.o 00:02:23.376 CXX test/cpp_headers/uuid.o 00:02:23.376 CXX test/cpp_headers/version.o 00:02:23.376 CXX test/cpp_headers/vfio_user_pci.o 00:02:23.376 CXX test/cpp_headers/vfio_user_spec.o 
00:02:23.376 CXX test/cpp_headers/vhost.o 00:02:23.376 LINK nvme_tcp_ut 00:02:23.376 LINK nvme_pcie_common_ut 00:02:23.376 CXX test/cpp_headers/vmd.o 00:02:23.376 CXX test/cpp_headers/xor.o 00:02:23.376 CXX test/cpp_headers/zipf.o 00:02:23.635 LINK nvme_rdma_ut 00:02:23.635 00:02:23.635 real 1m5.936s 00:02:23.635 user 3m30.159s 00:02:23.635 sys 0m48.192s 00:02:23.635 13:26:02 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:23.635 13:26:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.635 ************************************ 00:02:23.635 END TEST unittest_build 00:02:23.635 ************************************ 00:02:23.893 13:26:03 -- spdk/autotest.sh@25 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:23.893 13:26:03 -- nvmf/common.sh@7 -- # uname -s 00:02:23.893 13:26:03 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:02:23.893 13:26:03 -- nvmf/common.sh@7 -- # return 0 00:02:23.893 13:26:03 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:23.893 13:26:03 -- spdk/autotest.sh@32 -- # uname -s 00:02:23.893 13:26:03 -- spdk/autotest.sh@32 -- # '[' FreeBSD = Linux ']' 00:02:23.893 13:26:03 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:23.893 13:26:03 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:23.893 13:26:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:23.893 13:26:03 -- common/autotest_common.sh@10 -- # set +x 00:02:23.893 13:26:03 -- spdk/autotest.sh@70 -- # create_test_list 00:02:23.893 13:26:03 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:23.893 13:26:03 -- common/autotest_common.sh@10 -- # set +x 00:02:23.893 13:26:03 -- spdk/autotest.sh@72 -- # dirname /usr/home/vagrant/spdk_repo/spdk/autotest.sh 00:02:23.893 13:26:03 -- spdk/autotest.sh@72 -- # readlink -f /usr/home/vagrant/spdk_repo/spdk 00:02:23.893 13:26:03 -- spdk/autotest.sh@72 -- # src=/usr/home/vagrant/spdk_repo/spdk 00:02:23.893 13:26:03 -- spdk/autotest.sh@73 -- # out=/usr/home/vagrant/spdk_repo/spdk/../output 00:02:23.893 13:26:03 -- spdk/autotest.sh@74 -- # cd /usr/home/vagrant/spdk_repo/spdk 00:02:23.893 13:26:03 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:24.151 13:26:03 -- common/autotest_common.sh@1440 -- # uname 00:02:24.151 13:26:03 -- common/autotest_common.sh@1440 -- # '[' FreeBSD = FreeBSD ']' 00:02:24.151 13:26:03 -- common/autotest_common.sh@1441 -- # kldunload contigmem.ko 00:02:24.151 kldunload: can't find file contigmem.ko 00:02:24.151 13:26:03 -- common/autotest_common.sh@1441 -- # true 00:02:24.151 13:26:03 -- common/autotest_common.sh@1442 -- # '[' -n '' ']' 00:02:24.151 13:26:03 -- common/autotest_common.sh@1448 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/ 00:02:24.151 13:26:03 -- common/autotest_common.sh@1449 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/kernel/ 00:02:24.151 13:26:03 -- common/autotest_common.sh@1450 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/ 00:02:24.151 13:26:03 -- common/autotest_common.sh@1451 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/kernel/ 00:02:24.151 13:26:03 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:24.151 13:26:03 -- common/autotest_common.sh@1460 -- # uname 00:02:24.151 13:26:03 -- common/autotest_common.sh@1460 -- # [[ FreeBSD = FreeBSD ]] 00:02:24.151 13:26:03 -- common/autotest_common.sh@1460 -- # sysctl -n kern.ipc.maxsockbuf 00:02:24.151 13:26:03 -- 
common/autotest_common.sh@1460 -- # (( 2097152 < 4194304 )) 00:02:24.151 13:26:03 -- common/autotest_common.sh@1461 -- # sysctl kern.ipc.maxsockbuf=4194304 00:02:24.151 kern.ipc.maxsockbuf: 2097152 -> 4194304 00:02:24.151 13:26:03 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:24.151 13:26:03 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=clang 00:02:24.151 13:26:03 -- spdk/autotest.sh@83 -- # hash lcov 00:02:24.151 /usr/home/vagrant/spdk_repo/spdk/autotest.sh: line 83: hash: lcov: not found 00:02:24.151 13:26:03 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:24.151 13:26:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:24.151 13:26:03 -- common/autotest_common.sh@10 -- # set +x 00:02:24.151 13:26:03 -- spdk/autotest.sh@102 -- # rm -f 00:02:24.151 13:26:03 -- spdk/autotest.sh@105 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:02:24.151 kldunload: can't find file contigmem.ko 00:02:24.151 kldunload: can't find file nic_uio.ko 00:02:24.151 13:26:03 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:24.151 13:26:03 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:24.151 13:26:03 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:24.151 13:26:03 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:24.151 13:26:03 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:02:24.151 13:26:03 -- spdk/autotest.sh@121 -- # grep -v p 00:02:24.151 13:26:03 -- spdk/autotest.sh@121 -- # ls /dev/nvme0ns1 00:02:24.151 13:26:03 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:24.151 13:26:03 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:24.151 13:26:03 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0ns1 00:02:24.151 13:26:03 -- scripts/common.sh@380 -- # local block=/dev/nvme0ns1 pt 00:02:24.151 13:26:03 -- scripts/common.sh@389 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0ns1 00:02:24.151 nvme0ns1 is not a block device 00:02:24.151 13:26:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0ns1 00:02:24.151 /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh: line 393: blkid: command not found 00:02:24.151 13:26:03 -- scripts/common.sh@393 -- # pt= 00:02:24.151 13:26:03 -- scripts/common.sh@394 -- # return 1 00:02:24.151 13:26:03 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0ns1 bs=1M count=1 00:02:24.151 1+0 records in 00:02:24.151 1+0 records out 00:02:24.151 1048576 bytes transferred in 0.005961 secs (175912106 bytes/sec) 00:02:24.151 13:26:03 -- spdk/autotest.sh@129 -- # sync 00:02:24.717 13:26:04 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:24.717 13:26:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:24.717 13:26:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:25.283 13:26:04 -- spdk/autotest.sh@135 -- # uname -s 00:02:25.283 13:26:04 -- spdk/autotest.sh@135 -- # '[' FreeBSD = Linux ']' 00:02:25.283 13:26:04 -- spdk/autotest.sh@139 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:25.542 Contigmem (not present) 00:02:25.542 Buffer Size: not set 00:02:25.542 Num Buffers: not set 00:02:25.542 00:02:25.542 00:02:25.542 Type BDF Vendor Device Driver 00:02:25.542 NVMe 0:0:6:0 0x1b36 0x0010 nvme0 00:02:25.542 13:26:04 -- spdk/autotest.sh@141 -- # uname -s 00:02:25.542 13:26:04 -- spdk/autotest.sh@141 -- # [[ FreeBSD == Linux ]] 00:02:25.542 13:26:04 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:02:25.542 13:26:04 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:02:25.542 13:26:04 -- common/autotest_common.sh@10 -- # set +x 00:02:25.542 13:26:04 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:02:25.542 13:26:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:25.542 13:26:04 -- common/autotest_common.sh@10 -- # set +x 00:02:25.542 13:26:04 -- spdk/autotest.sh@150 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:02:25.542 kldunload: can't find file nic_uio.ko 00:02:25.542 hw.nic_uio.bdfs="0:6:0" 00:02:25.542 hw.contigmem.num_buffers="8" 00:02:25.542 hw.contigmem.buffer_size="268435456" 00:02:26.110 13:26:05 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:02:26.110 13:26:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:02:26.110 13:26:05 -- common/autotest_common.sh@10 -- # set +x 00:02:26.110 13:26:05 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:02:26.110 13:26:05 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:02:26.110 13:26:05 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:02:26.110 13:26:05 -- common/autotest_common.sh@1562 -- # bdfs=() 00:02:26.110 13:26:05 -- common/autotest_common.sh@1562 -- # local bdfs 00:02:26.110 13:26:05 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:02:26.110 13:26:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:02:26.110 13:26:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:02:26.110 13:26:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:02:26.110 13:26:05 -- common/autotest_common.sh@1499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:02:26.110 13:26:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:02:26.370 13:26:05 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:02:26.370 13:26:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:02:26.370 13:26:05 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:02:26.370 13:26:05 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:02:26.370 cat: /sys/bus/pci/devices/0000:00:06.0/device: No such file or directory 00:02:26.370 13:26:05 -- common/autotest_common.sh@1565 -- # device= 00:02:26.370 13:26:05 -- common/autotest_common.sh@1565 -- # true 00:02:26.370 13:26:05 -- common/autotest_common.sh@1566 -- # [[ '' == \0\x\0\a\5\4 ]] 00:02:26.370 13:26:05 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:02:26.370 13:26:05 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:02:26.370 13:26:05 -- common/autotest_common.sh@1578 -- # return 0 00:02:26.370 13:26:05 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:02:26.370 13:26:05 -- spdk/autotest.sh@162 -- # run_test unittest /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:02:26.370 13:26:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:26.370 13:26:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:26.370 13:26:05 -- common/autotest_common.sh@10 -- # set +x 00:02:26.370 ************************************ 00:02:26.370 START TEST unittest 00:02:26.370 ************************************ 00:02:26.370 13:26:05 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:02:26.370 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:02:26.370 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/unit 00:02:26.370 + testdir=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:02:26.370 +++ dirname 
/usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:02:26.370 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:02:26.370 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:02:26.370 + source /usr/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:02:26.370 ++ rpc_py=rpc_cmd 00:02:26.370 ++ set -e 00:02:26.370 ++ shopt -s nullglob 00:02:26.370 ++ shopt -s extglob 00:02:26.370 ++ [[ -e /usr/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:02:26.370 ++ source /usr/home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:02:26.370 +++ CONFIG_WPDK_DIR= 00:02:26.370 +++ CONFIG_ASAN=n 00:02:26.370 +++ CONFIG_VBDEV_COMPRESS=n 00:02:26.370 +++ CONFIG_HAVE_EXECINFO_H=y 00:02:26.370 +++ CONFIG_USDT=n 00:02:26.370 +++ CONFIG_CUSTOMOCF=n 00:02:26.370 +++ CONFIG_PREFIX=/usr/local 00:02:26.370 +++ CONFIG_RBD=n 00:02:26.370 +++ CONFIG_LIBDIR= 00:02:26.370 +++ CONFIG_IDXD=y 00:02:26.370 +++ CONFIG_NVME_CUSE=n 00:02:26.370 +++ CONFIG_SMA=n 00:02:26.370 +++ CONFIG_VTUNE=n 00:02:26.370 +++ CONFIG_TSAN=n 00:02:26.370 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:02:26.370 +++ CONFIG_VFIO_USER_DIR= 00:02:26.370 +++ CONFIG_PGO_CAPTURE=n 00:02:26.370 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:02:26.370 +++ CONFIG_ENV=/usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:26.370 +++ CONFIG_LTO=n 00:02:26.370 +++ CONFIG_ISCSI_INITIATOR=n 00:02:26.370 +++ CONFIG_CET=n 00:02:26.370 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:02:26.370 +++ CONFIG_OCF_PATH= 00:02:26.370 +++ CONFIG_RDMA_SET_TOS=y 00:02:26.370 +++ CONFIG_HAVE_ARC4RANDOM=y 00:02:26.370 +++ CONFIG_HAVE_LIBARCHIVE=n 00:02:26.370 +++ CONFIG_UBLK=n 00:02:26.370 +++ CONFIG_ISAL_CRYPTO=y 00:02:26.370 +++ CONFIG_OPENSSL_PATH= 00:02:26.370 +++ CONFIG_OCF=n 00:02:26.370 +++ CONFIG_FUSE=n 00:02:26.370 +++ CONFIG_VTUNE_DIR= 00:02:26.370 +++ CONFIG_FUZZER_LIB= 00:02:26.370 +++ CONFIG_FUZZER=n 00:02:26.370 +++ CONFIG_DPDK_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:02:26.370 +++ CONFIG_CRYPTO=n 00:02:26.370 +++ CONFIG_PGO_USE=n 00:02:26.370 +++ CONFIG_VHOST=n 00:02:26.370 +++ CONFIG_DAOS=n 00:02:26.370 +++ CONFIG_DPDK_INC_DIR= 00:02:26.370 +++ CONFIG_DAOS_DIR= 00:02:26.370 +++ CONFIG_UNIT_TESTS=y 00:02:26.370 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:02:26.370 +++ CONFIG_VIRTIO=n 00:02:26.370 +++ CONFIG_COVERAGE=n 00:02:26.370 +++ CONFIG_RDMA=y 00:02:26.370 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:02:26.370 +++ CONFIG_URING_PATH= 00:02:26.370 +++ CONFIG_XNVME=n 00:02:26.370 +++ CONFIG_VFIO_USER=n 00:02:26.370 +++ CONFIG_ARCH=native 00:02:26.370 +++ CONFIG_URING_ZNS=n 00:02:26.370 +++ CONFIG_WERROR=y 00:02:26.370 +++ CONFIG_HAVE_LIBBSD=n 00:02:26.370 +++ CONFIG_UBSAN=n 00:02:26.370 +++ CONFIG_IPSEC_MB_DIR= 00:02:26.370 +++ CONFIG_GOLANG=n 00:02:26.370 +++ CONFIG_ISAL=y 00:02:26.370 +++ CONFIG_IDXD_KERNEL=n 00:02:26.370 +++ CONFIG_DPDK_LIB_DIR= 00:02:26.370 +++ CONFIG_RDMA_PROV=verbs 00:02:26.370 +++ CONFIG_APPS=y 00:02:26.370 +++ CONFIG_SHARED=n 00:02:26.370 +++ CONFIG_FC_PATH= 00:02:26.370 +++ CONFIG_DPDK_PKG_CONFIG=n 00:02:26.370 +++ CONFIG_FC=n 00:02:26.370 +++ CONFIG_AVAHI=n 00:02:26.370 +++ CONFIG_FIO_PLUGIN=y 00:02:26.370 +++ CONFIG_RAID5F=n 00:02:26.370 +++ CONFIG_EXAMPLES=y 00:02:26.370 +++ CONFIG_TESTS=y 00:02:26.370 +++ CONFIG_CRYPTO_MLX5=n 00:02:26.370 +++ CONFIG_MAX_LCORES= 00:02:26.370 +++ CONFIG_IPSEC_MB=n 00:02:26.370 +++ CONFIG_DEBUG=y 00:02:26.370 +++ CONFIG_DPDK_COMPRESSDEV=n 00:02:26.370 +++ CONFIG_CROSS_PREFIX= 00:02:26.370 +++ CONFIG_URING=n 00:02:26.370 ++ source /usr/home/vagrant/spdk_repo/spdk/test/common/applications.sh 
00:02:26.370 +++++ dirname /usr/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:02:26.370 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/common 00:02:26.370 +++ _root=/usr/home/vagrant/spdk_repo/spdk/test/common 00:02:26.370 +++ _root=/usr/home/vagrant/spdk_repo/spdk 00:02:26.370 +++ _app_dir=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:02:26.370 +++ _test_app_dir=/usr/home/vagrant/spdk_repo/spdk/test/app 00:02:26.370 +++ _examples_dir=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:02:26.370 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:02:26.370 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:02:26.370 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:02:26.370 +++ VHOST_APP=("$_app_dir/vhost") 00:02:26.370 +++ DD_APP=("$_app_dir/spdk_dd") 00:02:26.370 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:02:26.370 +++ [[ -e /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:02:26.370 +++ [[ #ifndef SPDK_CONFIG_H 00:02:26.370 #define SPDK_CONFIG_H 00:02:26.370 #define SPDK_CONFIG_APPS 1 00:02:26.370 #define SPDK_CONFIG_ARCH native 00:02:26.370 #undef SPDK_CONFIG_ASAN 00:02:26.370 #undef SPDK_CONFIG_AVAHI 00:02:26.370 #undef SPDK_CONFIG_CET 00:02:26.370 #undef SPDK_CONFIG_COVERAGE 00:02:26.370 #define SPDK_CONFIG_CROSS_PREFIX 00:02:26.370 #undef SPDK_CONFIG_CRYPTO 00:02:26.370 #undef SPDK_CONFIG_CRYPTO_MLX5 00:02:26.370 #undef SPDK_CONFIG_CUSTOMOCF 00:02:26.370 #undef SPDK_CONFIG_DAOS 00:02:26.370 #define SPDK_CONFIG_DAOS_DIR 00:02:26.370 #define SPDK_CONFIG_DEBUG 1 00:02:26.370 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:02:26.370 #define SPDK_CONFIG_DPDK_DIR /usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:02:26.370 #define SPDK_CONFIG_DPDK_INC_DIR 00:02:26.370 #define SPDK_CONFIG_DPDK_LIB_DIR 00:02:26.370 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:02:26.370 #define SPDK_CONFIG_ENV /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:26.370 #define SPDK_CONFIG_EXAMPLES 1 00:02:26.370 #undef SPDK_CONFIG_FC 00:02:26.370 #define SPDK_CONFIG_FC_PATH 00:02:26.370 #define SPDK_CONFIG_FIO_PLUGIN 1 00:02:26.370 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:02:26.370 #undef SPDK_CONFIG_FUSE 00:02:26.370 #undef SPDK_CONFIG_FUZZER 00:02:26.370 #define SPDK_CONFIG_FUZZER_LIB 00:02:26.370 #undef SPDK_CONFIG_GOLANG 00:02:26.370 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:02:26.370 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:02:26.371 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:02:26.371 #undef SPDK_CONFIG_HAVE_LIBBSD 00:02:26.371 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:02:26.371 #define SPDK_CONFIG_IDXD 1 00:02:26.371 #undef SPDK_CONFIG_IDXD_KERNEL 00:02:26.371 #undef SPDK_CONFIG_IPSEC_MB 00:02:26.371 #define SPDK_CONFIG_IPSEC_MB_DIR 00:02:26.371 #define SPDK_CONFIG_ISAL 1 00:02:26.371 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:02:26.371 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:02:26.371 #define SPDK_CONFIG_LIBDIR 00:02:26.371 #undef SPDK_CONFIG_LTO 00:02:26.371 #define SPDK_CONFIG_MAX_LCORES 00:02:26.371 #undef SPDK_CONFIG_NVME_CUSE 00:02:26.371 #undef SPDK_CONFIG_OCF 00:02:26.371 #define SPDK_CONFIG_OCF_PATH 00:02:26.371 #define SPDK_CONFIG_OPENSSL_PATH 00:02:26.371 #undef SPDK_CONFIG_PGO_CAPTURE 00:02:26.371 #undef SPDK_CONFIG_PGO_USE 00:02:26.371 #define SPDK_CONFIG_PREFIX /usr/local 00:02:26.371 #undef SPDK_CONFIG_RAID5F 00:02:26.371 #undef SPDK_CONFIG_RBD 00:02:26.371 #define SPDK_CONFIG_RDMA 1 00:02:26.371 #define SPDK_CONFIG_RDMA_PROV verbs 00:02:26.371 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:02:26.371 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:02:26.371 #define SPDK_CONFIG_RDMA_SET_TOS 
1 00:02:26.371 #undef SPDK_CONFIG_SHARED 00:02:26.371 #undef SPDK_CONFIG_SMA 00:02:26.371 #define SPDK_CONFIG_TESTS 1 00:02:26.371 #undef SPDK_CONFIG_TSAN 00:02:26.371 #undef SPDK_CONFIG_UBLK 00:02:26.371 #undef SPDK_CONFIG_UBSAN 00:02:26.371 #define SPDK_CONFIG_UNIT_TESTS 1 00:02:26.371 #undef SPDK_CONFIG_URING 00:02:26.371 #define SPDK_CONFIG_URING_PATH 00:02:26.371 #undef SPDK_CONFIG_URING_ZNS 00:02:26.371 #undef SPDK_CONFIG_USDT 00:02:26.371 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:02:26.371 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:02:26.371 #undef SPDK_CONFIG_VFIO_USER 00:02:26.371 #define SPDK_CONFIG_VFIO_USER_DIR 00:02:26.371 #undef SPDK_CONFIG_VHOST 00:02:26.371 #undef SPDK_CONFIG_VIRTIO 00:02:26.371 #undef SPDK_CONFIG_VTUNE 00:02:26.371 #define SPDK_CONFIG_VTUNE_DIR 00:02:26.371 #define SPDK_CONFIG_WERROR 1 00:02:26.371 #define SPDK_CONFIG_WPDK_DIR 00:02:26.371 #undef SPDK_CONFIG_XNVME 00:02:26.371 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:02:26.371 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:02:26.371 ++ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:26.371 +++ [[ -e /bin/wpdk_common.sh ]] 00:02:26.371 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:26.371 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:26.371 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:26.371 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:26.371 ++++ export PATH 00:02:26.371 ++++ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:26.371 ++ source /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:02:26.371 +++++ dirname /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:02:26.371 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:02:26.371 +++ _pmdir=/usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:02:26.371 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:02:26.371 +++ _pmrootdir=/usr/home/vagrant/spdk_repo/spdk 00:02:26.371 +++ TEST_TAG=N/A 00:02:26.371 +++ TEST_TAG_FILE=/usr/home/vagrant/spdk_repo/spdk/.run_test_name 00:02:26.371 ++ : 1 00:02:26.371 ++ export RUN_NIGHTLY 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_RUN_VALGRIND 00:02:26.371 ++ : 1 00:02:26.371 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:02:26.371 ++ : 1 00:02:26.371 ++ export SPDK_TEST_UNITTEST 00:02:26.371 ++ : 00:02:26.371 ++ export SPDK_TEST_AUTOBUILD 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_RELEASE_BUILD 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_ISAL 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_ISCSI 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_ISCSI_INITIATOR 00:02:26.371 ++ : 1 00:02:26.371 ++ export SPDK_TEST_NVME 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_NVME_PMR 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_NVME_BP 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_NVME_CLI 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_NVME_CUSE 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_NVME_FDP 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_NVMF 00:02:26.371 ++ : 0 00:02:26.371 ++ export 
SPDK_TEST_VFIOUSER 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_VFIOUSER_QEMU 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_FUZZER 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_FUZZER_SHORT 00:02:26.371 ++ : rdma 00:02:26.371 ++ export SPDK_TEST_NVMF_TRANSPORT 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_RBD 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_VHOST 00:02:26.371 ++ : 1 00:02:26.371 ++ export SPDK_TEST_BLOCKDEV 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_IOAT 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_BLOBFS 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_VHOST_INIT 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_LVOL 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_VBDEV_COMPRESS 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_RUN_ASAN 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_RUN_UBSAN 00:02:26.371 ++ : 00:02:26.371 ++ export SPDK_RUN_EXTERNAL_DPDK 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_RUN_NON_ROOT 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_CRYPTO 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_FTL 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_OCF 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_VMD 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_OPAL 00:02:26.371 ++ : 00:02:26.371 ++ export SPDK_TEST_NATIVE_DPDK 00:02:26.371 ++ : true 00:02:26.371 ++ export SPDK_AUTOTEST_X 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_RAID5 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_URING 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_USDT 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_USE_IGB_UIO 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_SCHEDULER 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_SCANBUILD 00:02:26.371 ++ : 00:02:26.371 ++ export SPDK_TEST_NVMF_NICS 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_SMA 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_DAOS 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_XNVME 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_ACCEL_DSA 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_ACCEL_IAA 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_ACCEL_IOAT 00:02:26.371 ++ : 00:02:26.371 ++ export SPDK_TEST_FUZZER_TARGET 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_TEST_NVMF_MDNS 00:02:26.371 ++ : 0 00:02:26.371 ++ export SPDK_JSONRPC_GO_CLIENT 00:02:26.371 ++ export SPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/lib 00:02:26.371 ++ SPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/lib 00:02:26.371 ++ export DPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:02:26.371 ++ DPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:02:26.371 ++ export VFIO_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:02:26.371 ++ VFIO_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:02:26.371 ++ export LD_LIBRARY_PATH=:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:02:26.371 ++ 
LD_LIBRARY_PATH=:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:02:26.371 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:02:26.371 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:02:26.371 ++ export PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:02:26.371 ++ PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:02:26.371 ++ export PYTHONDONTWRITEBYTECODE=1 00:02:26.371 ++ PYTHONDONTWRITEBYTECODE=1 00:02:26.371 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:02:26.371 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:02:26.371 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:02:26.371 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:02:26.371 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:02:26.371 ++ rm -rf /var/tmp/asan_suppression_file 00:02:26.371 ++ cat 00:02:26.371 ++ echo leak:libfuse3.so 00:02:26.371 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:02:26.371 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:02:26.372 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:02:26.372 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:02:26.372 ++ '[' -z /var/spdk/dependencies ']' 00:02:26.372 ++ export DEPENDENCY_DIR 00:02:26.372 ++ export SPDK_BIN_DIR=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:02:26.372 ++ SPDK_BIN_DIR=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:02:26.372 ++ export SPDK_EXAMPLE_DIR=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:02:26.372 ++ SPDK_EXAMPLE_DIR=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:02:26.372 ++ export QEMU_BIN= 00:02:26.372 ++ QEMU_BIN= 00:02:26.372 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:26.372 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:26.372 ++ export AR_TOOL=/usr/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:02:26.372 ++ AR_TOOL=/usr/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:02:26.372 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:26.372 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:26.372 ++ '[' 0 -eq 0 ']' 00:02:26.372 ++ export valgrind= 00:02:26.372 ++ valgrind= 00:02:26.372 +++ uname -s 00:02:26.372 ++ '[' FreeBSD = Linux ']' 00:02:26.372 +++ uname -s 00:02:26.372 ++ '[' FreeBSD = FreeBSD ']' 00:02:26.372 ++ MAKE=gmake 00:02:26.372 +++ sysctl -a 00:02:26.372 +++ grep -E -i hw.ncpu 00:02:26.372 +++ awk '{print $2}' 00:02:26.372 ++ MAKEFLAGS=-j10 00:02:26.372 ++ HUGEMEM=2048 00:02:26.372 ++ export HUGEMEM=2048 00:02:26.372 ++ HUGEMEM=2048 00:02:26.372 ++ '[' -z /usr/home/vagrant/spdk_repo/spdk/../output ']' 00:02:26.372 ++ NO_HUGE=() 00:02:26.372 ++ TEST_MODE= 00:02:26.372 ++ [[ -z '' ]] 00:02:26.372 ++ PYTHONPATH+=:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:02:26.372 ++ exec 00:02:26.372 ++ 
PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:02:26.372 ++ /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:02:26.372 ++ set_test_storage 2147483648 00:02:26.372 ++ [[ -v testdir ]] 00:02:26.372 ++ local requested_size=2147483648 00:02:26.372 ++ local mount target_dir 00:02:26.372 ++ local -A mounts fss sizes avails uses 00:02:26.372 ++ local source fs size avail mount use 00:02:26.372 ++ local storage_fallback storage_candidates 00:02:26.372 +++ mktemp -udt spdk.XXXXXX 00:02:26.372 ++ storage_fallback=/tmp/spdk.XXXXXX.jKoedBpo 00:02:26.372 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:02:26.372 ++ [[ -n '' ]] 00:02:26.372 ++ [[ -n '' ]] 00:02:26.372 ++ mkdir -p /usr/home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.XXXXXX.jKoedBpo/tests/unit /tmp/spdk.XXXXXX.jKoedBpo 00:02:26.372 ++ requested_size=2214592512 00:02:26.372 ++ read -r source fs size use avail _ mount 00:02:26.372 +++ df -T 00:02:26.372 +++ grep -v Filesystem 00:02:26.372 ++ mounts["$mount"]=/dev/gptid/bd0c1ea5-f644-11ee-93e1-001e672be6d6 00:02:26.372 ++ fss["$mount"]=ufs 00:02:26.372 ++ avails["$mount"]=17249083392 00:02:26.372 ++ sizes["$mount"]=31182712832 00:02:26.372 ++ uses["$mount"]=11439013888 00:02:26.372 ++ read -r source fs size use avail _ mount 00:02:26.372 ++ mounts["$mount"]=devfs 00:02:26.372 ++ fss["$mount"]=devfs 00:02:26.372 ++ avails["$mount"]=0 00:02:26.372 ++ sizes["$mount"]=1024 00:02:26.372 ++ uses["$mount"]=1024 00:02:26.372 ++ read -r source fs size use avail _ mount 00:02:26.372 ++ mounts["$mount"]=tmpfs 00:02:26.372 ++ fss["$mount"]=tmpfs 00:02:26.372 ++ avails["$mount"]=2147463168 00:02:26.372 ++ sizes["$mount"]=2147483648 00:02:26.372 ++ uses["$mount"]=20480 00:02:26.372 ++ read -r source fs size use avail _ mount 00:02:26.372 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt/output 00:02:26.372 ++ fss["$mount"]=fusefs.sshfs 00:02:26.372 ++ avails["$mount"]=94486269952 00:02:26.372 ++ sizes["$mount"]=105088212992 00:02:26.372 ++ uses["$mount"]=5216509952 00:02:26.372 ++ read -r source fs size use avail _ mount 00:02:26.372 ++ printf '* Looking for test storage...\n' 00:02:26.372 * Looking for test storage... 
00:02:26.372 ++ local target_space new_size 00:02:26.372 ++ for target_dir in "${storage_candidates[@]}" 00:02:26.372 +++ df /usr/home/vagrant/spdk_repo/spdk/test/unit 00:02:26.372 +++ awk '$1 !~ /Filesystem/{print $6}' 00:02:26.372 ++ mount=/ 00:02:26.372 ++ target_space=17249083392 00:02:26.372 ++ (( target_space == 0 || target_space < requested_size )) 00:02:26.372 ++ (( target_space >= requested_size )) 00:02:26.372 ++ [[ ufs == tmpfs ]] 00:02:26.372 ++ [[ ufs == ramfs ]] 00:02:26.372 ++ [[ / == / ]] 00:02:26.372 ++ new_size=13653606400 00:02:26.372 ++ (( new_size * 100 / sizes[/] > 95 )) 00:02:26.372 ++ export SPDK_TEST_STORAGE=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:02:26.372 ++ SPDK_TEST_STORAGE=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:02:26.372 ++ printf '* Found test storage at %s\n' /usr/home/vagrant/spdk_repo/spdk/test/unit 00:02:26.372 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/unit 00:02:26.372 ++ return 0 00:02:26.372 ++ set -o errtrace 00:02:26.372 ++ shopt -s extdebug 00:02:26.372 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:02:26.372 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:02:26.372 13:26:05 -- common/autotest_common.sh@1672 -- # true 00:02:26.372 13:26:05 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:02:26.372 13:26:05 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:02:26.372 13:26:05 -- common/autotest_common.sh@29 -- # exec 00:02:26.372 13:26:05 -- common/autotest_common.sh@31 -- # xtrace_restore 00:02:26.372 13:26:05 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:02:26.372 13:26:05 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:02:26.372 13:26:05 -- common/autotest_common.sh@18 -- # set -x 00:02:26.372 13:26:05 -- unit/unittest.sh@17 -- # cd /usr/home/vagrant/spdk_repo/spdk 00:02:26.372 13:26:05 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:02:26.372 13:26:05 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:02:26.372 13:26:05 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:02:26.372 13:26:05 -- unit/unittest.sh@178 -- # grep CC_TYPE /usr/home/vagrant/spdk_repo/spdk/mk/cc.mk 00:02:26.372 13:26:05 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=clang 00:02:26.372 13:26:05 -- unit/unittest.sh@179 -- # hash lcov 00:02:26.372 /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh: line 179: hash: lcov: not found 00:02:26.372 13:26:05 -- unit/unittest.sh@182 -- # cov_avail=no 00:02:26.372 13:26:05 -- unit/unittest.sh@184 -- # '[' no = yes ']' 00:02:26.372 13:26:05 -- unit/unittest.sh@206 -- # uname -m 00:02:26.372 13:26:05 -- unit/unittest.sh@206 -- # '[' amd64 = aarch64 ']' 00:02:26.372 13:26:05 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:02:26.372 13:26:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:26.372 13:26:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:26.372 13:26:05 -- common/autotest_common.sh@10 -- # set +x 00:02:26.372 ************************************ 00:02:26.372 START TEST unittest_pci_event 00:02:26.372 ************************************ 00:02:26.372 13:26:05 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:02:26.372 00:02:26.372 00:02:26.372 CUnit - A unit testing framework for C - Version 2.1-3 00:02:26.372 http://cunit.sourceforge.net/ 00:02:26.372 00:02:26.372 00:02:26.372 Suite: pci_event 00:02:26.372 Test: test_pci_parse_event ...passed 
00:02:26.372 00:02:26.372 Run Summary: Type Total Ran Passed Failed Inactive 00:02:26.372 suites 1 1 n/a 0 0 00:02:26.372 tests 1 1 1 0 0 00:02:26.372 asserts 1 1 1 0 n/a 00:02:26.372 00:02:26.372 Elapsed time = 0.000 seconds 00:02:26.372 00:02:26.372 real 0m0.030s 00:02:26.372 user 0m0.004s 00:02:26.372 sys 0m0.020s 00:02:26.372 13:26:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:26.372 13:26:05 -- common/autotest_common.sh@10 -- # set +x 00:02:26.372 ************************************ 00:02:26.372 END TEST unittest_pci_event 00:02:26.372 ************************************ 00:02:26.633 13:26:05 -- unit/unittest.sh@211 -- # run_test unittest_include /usr/home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:02:26.633 13:26:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:26.633 13:26:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:26.633 13:26:05 -- common/autotest_common.sh@10 -- # set +x 00:02:26.633 ************************************ 00:02:26.633 START TEST unittest_include 00:02:26.633 ************************************ 00:02:26.633 13:26:05 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:02:26.633 00:02:26.633 00:02:26.633 CUnit - A unit testing framework for C - Version 2.1-3 00:02:26.633 http://cunit.sourceforge.net/ 00:02:26.633 00:02:26.633 00:02:26.633 Suite: histogram 00:02:26.633 Test: histogram_test ...passed 00:02:26.633 Test: histogram_merge ...passed 00:02:26.633 00:02:26.633 Run Summary: Type Total Ran Passed Failed Inactive 00:02:26.633 suites 1 1 n/a 0 0 00:02:26.633 tests 2 2 2 0 0 00:02:26.633 asserts 50 50 50 0 n/a 00:02:26.633 00:02:26.633 Elapsed time = 0.008 seconds 00:02:26.633 00:02:26.633 real 0m0.011s 00:02:26.633 user 0m0.002s 00:02:26.633 sys 0m0.010s 00:02:26.633 13:26:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:26.633 13:26:05 -- common/autotest_common.sh@10 -- # set +x 00:02:26.633 ************************************ 00:02:26.633 END TEST unittest_include 00:02:26.633 ************************************ 00:02:26.633 13:26:05 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:02:26.633 13:26:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:26.633 13:26:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:26.633 13:26:05 -- common/autotest_common.sh@10 -- # set +x 00:02:26.633 ************************************ 00:02:26.633 START TEST unittest_bdev 00:02:26.633 ************************************ 00:02:26.633 13:26:05 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:02:26.633 13:26:05 -- unit/unittest.sh@20 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:02:26.633 00:02:26.633 00:02:26.633 CUnit - A unit testing framework for C - Version 2.1-3 00:02:26.633 http://cunit.sourceforge.net/ 00:02:26.633 00:02:26.633 00:02:26.633 Suite: bdev 00:02:26.633 Test: bytes_to_blocks_test ...passed 00:02:26.633 Test: num_blocks_test ...passed 00:02:26.633 Test: io_valid_test ...passed 00:02:26.633 Test: open_write_test ...[2024-07-10 13:26:05.838312] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:02:26.633 [2024-07-10 13:26:05.838621] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:02:26.633 [2024-07-10 13:26:05.838653] 
/usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:02:26.633 passed 00:02:26.633 Test: claim_test ...passed 00:02:26.633 Test: alias_add_del_test ...[2024-07-10 13:26:05.842406] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:02:26.633 [2024-07-10 13:26:05.842451] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:02:26.633 [2024-07-10 13:26:05.842467] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:02:26.633 passed 00:02:26.633 Test: get_device_stat_test ...passed 00:02:26.633 Test: bdev_io_types_test ...passed 00:02:26.633 Test: bdev_io_wait_test ...passed 00:02:26.633 Test: bdev_io_spans_split_test ...passed 00:02:26.633 Test: bdev_io_boundary_split_test ...passed 00:02:26.633 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-10 13:26:05.850329] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:02:26.633 passed 00:02:26.633 Test: bdev_io_mix_split_test ...passed 00:02:26.633 Test: bdev_io_split_with_io_wait ...passed 00:02:26.633 Test: bdev_io_write_unit_split_test ...[2024-07-10 13:26:05.854387] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2743:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:02:26.633 [2024-07-10 13:26:05.854419] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2743:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:02:26.633 [2024-07-10 13:26:05.854429] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2743:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:02:26.633 [2024-07-10 13:26:05.854444] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2743:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:02:26.633 passed 00:02:26.633 Test: bdev_io_alignment_with_boundary ...passed 00:02:26.633 Test: bdev_io_alignment ...passed 00:02:26.633 Test: bdev_histograms ...passed 00:02:26.633 Test: bdev_write_zeroes ...passed 00:02:26.633 Test: bdev_compare_and_write ...passed 00:02:26.633 Test: bdev_compare ...passed 00:02:26.633 Test: bdev_compare_emulated ...passed 00:02:26.633 Test: bdev_zcopy_write ...passed 00:02:26.633 Test: bdev_zcopy_read ...passed 00:02:26.633 Test: bdev_open_while_hotremove ...passed 00:02:26.633 Test: bdev_close_while_hotremove ...passed 00:02:26.633 Test: bdev_open_ext_test ...passed 00:02:26.633 Test: bdev_open_ext_unregister ...[2024-07-10 13:26:05.867099] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:02:26.633 [2024-07-10 13:26:05.867142] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:02:26.633 passed 00:02:26.633 Test: bdev_set_io_timeout ...passed 00:02:26.633 Test: bdev_set_qd_sampling ...passed 00:02:26.633 Test: lba_range_overlap ...passed 00:02:26.633 Test: lock_lba_range_check_ranges ...passed 00:02:26.633 Test: lock_lba_range_with_io_outstanding ...passed 00:02:26.633 Test: lock_lba_range_overlapped ...passed 00:02:26.633 Test: bdev_quiesce ...[2024-07-10 13:26:05.872784] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:02:26.633 passed 00:02:26.633 Test: bdev_io_abort ...passed 00:02:26.633 Test: bdev_unmap ...passed 00:02:26.633 Test: bdev_write_zeroes_split_test ...passed 00:02:26.633 Test: bdev_set_options_test ...passed 00:02:26.633 Test: bdev_get_memory_domains ...[2024-07-10 13:26:05.876022] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:02:26.633 passed 00:02:26.633 Test: bdev_io_ext ...passed 00:02:26.633 Test: bdev_io_ext_no_opts ...passed 00:02:26.633 Test: bdev_io_ext_invalid_opts ...passed 00:02:26.633 Test: bdev_io_ext_split ...passed 00:02:26.633 Test: bdev_io_ext_bounce_buffer ...passed 00:02:26.633 Test: bdev_register_uuid_alias ...[2024-07-10 13:26:05.881752] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name f6029fb8-3ebf-11ef-b9c4-5b09e08d4792 already exists 00:02:26.633 [2024-07-10 13:26:05.881783] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:f6029fb8-3ebf-11ef-b9c4-5b09e08d4792 alias for bdev bdev0 00:02:26.633 passed 00:02:26.633 Test: bdev_unregister_by_name ...passed 00:02:26.633 Test: for_each_bdev_test ...[2024-07-10 13:26:05.882031] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:02:26.633 [2024-07-10 13:26:05.882040] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7845:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:02:26.633 passed 00:02:26.633 Test: bdev_seek_test ...passed 00:02:26.633 Test: bdev_copy ...passed 00:02:26.633 Test: bdev_copy_split_test ...passed 00:02:26.633 Test: examine_locks ...passed 00:02:26.633 Test: claim_v2_rwo ...[2024-07-10 13:26:05.885089] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:02:26.633 [2024-07-10 13:26:05.885104] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:02:26.633 passed 00:02:26.633 Test: claim_v2_rom ...[2024-07-10 13:26:05.885112] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:02:26.634 [2024-07-10 13:26:05.885118] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:02:26.634 [2024-07-10 13:26:05.885124] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:02:26.634 [2024-07-10 13:26:05.885139] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8566:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:02:26.634 [2024-07-10 13:26:05.885159] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:02:26.634 passed 00:02:26.634 Test: claim_v2_rwm ...passed 00:02:26.634 Test: claim_v2_existing_writer ...passed 00:02:26.634 Test: claim_v2_existing_v1 ...[2024-07-10 13:26:05.885166] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 
00:02:26.634 [2024-07-10 13:26:05.885173] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:02:26.634 [2024-07-10 13:26:05.885179] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:02:26.634 [2024-07-10 13:26:05.885187] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:02:26.634 [2024-07-10 13:26:05.885193] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8604:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:02:26.634 [2024-07-10 13:26:05.885208] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8639:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:02:26.634 [2024-07-10 13:26:05.885215] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:02:26.634 [2024-07-10 13:26:05.885222] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:02:26.634 [2024-07-10 13:26:05.885228] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:02:26.634 [2024-07-10 13:26:05.885233] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:02:26.634 [2024-07-10 13:26:05.885240] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:02:26.634 [2024-07-10 13:26:05.885248] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8639:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:02:26.634 [2024-07-10 13:26:05.885263] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8604:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:02:26.634 [2024-07-10 13:26:05.885269] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8604:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:02:26.634 [2024-07-10 13:26:05.885284] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:02:26.634 passed 00:02:26.634 Test: claim_v1_existing_v2 ...passed 00:02:26.634 Test: examine_claimed ...passed 00:02:26.634 00:02:26.634 Run Summary: Type Total Ran Passed Failed Inactive 00:02:26.634 suites 1 1 n/a 0 0 00:02:26.634 tests 59 59 59 0 0 00:02:26.634 asserts 4599 4599 4599 0 n/a 00:02:26.634 00:02:26.634 Elapsed time = 0.055 seconds 00:02:26.634 [2024-07-10 13:26:05.885291] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:02:26.634 [2024-07-10 13:26:05.885339] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 
00:02:26.634 [2024-07-10 13:26:05.885355] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:02:26.634 [2024-07-10 13:26:05.885363] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:02:26.634 [2024-07-10 13:26:05.885370] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:02:26.634 [2024-07-10 13:26:05.885406] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:02:26.634 13:26:05 -- unit/unittest.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:02:26.634 00:02:26.634 00:02:26.634 CUnit - A unit testing framework for C - Version 2.1-3 00:02:26.634 http://cunit.sourceforge.net/ 00:02:26.634 00:02:26.634 00:02:26.634 Suite: nvme 00:02:26.634 Test: test_create_ctrlr ...passed 00:02:26.634 Test: test_reset_ctrlr ...[2024-07-10 13:26:05.897531] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.634 passed 00:02:26.634 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:02:26.634 Test: test_failover_ctrlr ...passed 00:02:26.634 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-10 13:26:05.898440] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.634 [2024-07-10 13:26:05.898497] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.634 [2024-07-10 13:26:05.898539] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.634 passed 00:02:26.634 Test: test_pending_reset ...[2024-07-10 13:26:05.898817] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.634 [2024-07-10 13:26:05.898887] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.634 passed 00:02:26.634 Test: test_attach_ctrlr ...[2024-07-10 13:26:05.899078] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:02:26.634 passed 00:02:26.634 Test: test_aer_cb ...passed 00:02:26.634 Test: test_submit_nvme_cmd ...passed 00:02:26.634 Test: test_add_remove_trid ...passed 00:02:26.634 Test: test_abort ...[2024-07-10 13:26:05.899647] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 
00:02:26.634 passed 00:02:26.634 Test: test_get_io_qpair ...passed 00:02:26.634 Test: test_bdev_unregister ...passed 00:02:26.634 Test: test_compare_ns ...passed 00:02:26.634 Test: test_init_ana_log_page ...passed 00:02:26.634 Test: test_get_memory_domains ...passed 00:02:26.634 Test: test_reconnect_qpair ...[2024-07-10 13:26:05.900104] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.634 passed 00:02:26.634 Test: test_create_bdev_ctrlr ...[2024-07-10 13:26:05.900197] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:02:26.634 passed 00:02:26.634 Test: test_add_multi_ns_to_bdev ...[2024-07-10 13:26:05.900410] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:02:26.634 passed 00:02:26.634 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:02:26.634 Test: test_admin_path ...passed 00:02:26.634 Test: test_reset_bdev_ctrlr ...passed 00:02:26.634 Test: test_find_io_path ...passed 00:02:26.634 Test: test_retry_io_if_ana_state_is_updating ...passed 00:02:26.634 Test: test_retry_io_for_io_path_error ...passed 00:02:26.634 Test: test_retry_io_count ...passed 00:02:26.634 Test: test_concurrent_read_ana_log_page ...passed 00:02:26.634 Test: test_retry_io_for_ana_error ...passed 00:02:26.634 Test: test_check_io_error_resiliency_params ...[2024-07-10 13:26:05.901519] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:02:26.634 [2024-07-10 13:26:05.901549] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:02:26.634 [2024-07-10 13:26:05.901573] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:02:26.634 [2024-07-10 13:26:05.901593] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:02:26.634 [2024-07-10 13:26:05.901620] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:02:26.634 [2024-07-10 13:26:05.901664] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:02:26.634 [2024-07-10 13:26:05.901688] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:02:26.634 [2024-07-10 13:26:05.901704] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 
00:02:26.634 [2024-07-10 13:26:05.901719] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:02:26.634 passed 00:02:26.634 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:02:26.634 Test: test_reconnect_ctrlr ...[2024-07-10 13:26:05.901895] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.634 [2024-07-10 13:26:05.901935] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.634 [2024-07-10 13:26:05.902007] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.634 [2024-07-10 13:26:05.902043] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.634 [2024-07-10 13:26:05.902078] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.634 passed 00:02:26.634 Test: test_retry_failover_ctrlr ...[2024-07-10 13:26:05.902172] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.634 passed 00:02:26.634 Test: test_fail_path ...[2024-07-10 13:26:05.902284] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.634 [2024-07-10 13:26:05.902333] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.634 [2024-07-10 13:26:05.902369] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.634 [2024-07-10 13:26:05.902402] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.634 [2024-07-10 13:26:05.902437] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.634 passed 00:02:26.634 Test: test_nvme_ns_cmp ...passed 00:02:26.634 Test: test_ana_transition ...passed 00:02:26.634 Test: test_set_preferred_path ...passed 00:02:26.634 Test: test_find_next_io_path ...passed 00:02:26.635 Test: test_find_io_path_min_qd ...passed 00:02:26.635 Test: test_disable_auto_failback ...[2024-07-10 13:26:05.902727] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.635 passed 00:02:26.635 Test: test_set_multipath_policy ...passed 00:02:26.635 Test: test_uuid_generation ...passed 00:02:26.635 Test: test_retry_io_to_same_path ...passed 00:02:26.635 Test: test_race_between_reset_and_disconnected ...passed 00:02:26.635 Test: test_ctrlr_op_rpc ...passed 00:02:26.635 Test: test_bdev_ctrlr_op_rpc ...passed 00:02:26.635 Test: test_disable_enable_ctrlr ...[2024-07-10 13:26:05.947239] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:02:26.635 passed 00:02:26.635 Test: test_delete_ctrlr_done ...passed 00:02:26.635 Test: test_ns_remove_during_reset ...[2024-07-10 13:26:05.947283] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:26.635 passed 00:02:26.635 00:02:26.635 Run Summary: Type Total Ran Passed Failed Inactive 00:02:26.635 suites 1 1 n/a 0 0 00:02:26.635 tests 48 48 48 0 0 00:02:26.635 asserts 3553 3553 3553 0 n/a 00:02:26.635 00:02:26.635 Elapsed time = 0.016 seconds 00:02:26.635 13:26:05 -- unit/unittest.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:02:26.635 Test Options 00:02:26.635 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:02:26.635 00:02:26.635 00:02:26.635 CUnit - A unit testing framework for C - Version 2.1-3 00:02:26.635 http://cunit.sourceforge.net/ 00:02:26.635 00:02:26.635 00:02:26.635 Suite: raid 00:02:26.635 Test: test_create_raid ...passed 00:02:26.635 Test: test_create_raid_superblock ...passed 00:02:26.635 Test: test_delete_raid ...passed 00:02:26.635 Test: test_create_raid_invalid_args ...[2024-07-10 13:26:05.960297] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:02:26.635 [2024-07-10 13:26:05.960760] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:02:26.635 [2024-07-10 13:26:05.960959] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:02:26.635 [2024-07-10 13:26:05.961042] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:02:26.635 [2024-07-10 13:26:05.961299] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:02:26.635 passed 00:02:26.635 Test: test_delete_raid_invalid_args ...passed 00:02:26.635 Test: test_io_channel ...passed 00:02:26.635 Test: test_reset_io ...passed 00:02:26.635 Test: test_write_io ...passed 00:02:26.635 Test: test_read_io ...passed 00:02:27.587 Test: test_unmap_io ...passed 00:02:27.587 Test: test_io_failure ...passed 00:02:27.587 Test: test_multi_raid_no_io ...[2024-07-10 13:26:06.910748] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:02:27.587 passed 00:02:27.587 Test: test_multi_raid_with_io ...passed 00:02:27.587 Test: test_io_type_supported ...passed 00:02:27.587 Test: test_raid_json_dump_info ...passed 00:02:27.587 Test: test_context_size ...passed 00:02:27.587 Test: test_raid_level_conversions ...passed 00:02:27.587 Test: test_raid_process ...passed 00:02:27.587 Test: test_raid_io_split ...passed 00:02:27.587 00:02:27.587 Run Summary: Type Total Ran Passed Failed Inactive 00:02:27.587 suites 1 1 n/a 0 0 00:02:27.587 tests 19 19 19 0 0 00:02:27.587 asserts 177879 177879 177879 0 n/a 00:02:27.587 00:02:27.587 Elapsed time = 0.953 seconds 00:02:27.587 13:26:06 -- unit/unittest.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:02:27.587 00:02:27.587 00:02:27.587 CUnit - A unit testing framework for C - Version 2.1-3 00:02:27.587 http://cunit.sourceforge.net/ 00:02:27.587 
00:02:27.587 00:02:27.587 Suite: raid_sb 00:02:27.587 Test: test_raid_bdev_write_superblock ...passed 00:02:27.587 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:02:27.587 Test: test_raid_bdev_parse_superblock ...[2024-07-10 13:26:06.925590] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 121:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:02:27.587 passed 00:02:27.587 00:02:27.587 Run Summary: Type Total Ran Passed Failed Inactive 00:02:27.587 suites 1 1 n/a 0 0 00:02:27.587 tests 3 3 3 0 0 00:02:27.587 asserts 32 32 32 0 n/a 00:02:27.587 00:02:27.587 Elapsed time = 0.000 seconds 00:02:27.587 13:26:06 -- unit/unittest.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:02:27.587 00:02:27.587 00:02:27.587 CUnit - A unit testing framework for C - Version 2.1-3 00:02:27.587 http://cunit.sourceforge.net/ 00:02:27.587 00:02:27.587 00:02:27.587 Suite: concat 00:02:27.587 Test: test_concat_start ...passed 00:02:27.587 Test: test_concat_rw ...passed 00:02:27.587 Test: test_concat_null_payload ...passed 00:02:27.587 00:02:27.587 Run Summary: Type Total Ran Passed Failed Inactive 00:02:27.587 suites 1 1 n/a 0 0 00:02:27.587 tests 3 3 3 0 0 00:02:27.587 asserts 8097 8097 8097 0 n/a 00:02:27.587 00:02:27.587 Elapsed time = 0.000 seconds 00:02:27.587 13:26:06 -- unit/unittest.sh@25 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:02:27.587 00:02:27.587 00:02:27.587 CUnit - A unit testing framework for C - Version 2.1-3 00:02:27.587 http://cunit.sourceforge.net/ 00:02:27.587 00:02:27.587 00:02:27.587 Suite: raid1 00:02:27.587 Test: test_raid1_start ...passed 00:02:27.587 Test: test_raid1_read_balancing ...passed 00:02:27.587 00:02:27.587 Run Summary: Type Total Ran Passed Failed Inactive 00:02:27.587 suites 1 1 n/a 0 0 00:02:27.587 tests 2 2 2 0 0 00:02:27.587 asserts 2856 2856 2856 0 n/a 00:02:27.587 00:02:27.587 Elapsed time = 0.000 seconds 00:02:27.587 13:26:06 -- unit/unittest.sh@26 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:02:27.587 00:02:27.587 00:02:27.587 CUnit - A unit testing framework for C - Version 2.1-3 00:02:27.587 http://cunit.sourceforge.net/ 00:02:27.587 00:02:27.587 00:02:27.587 Suite: zone 00:02:27.587 Test: test_zone_get_operation ...passed 00:02:27.587 Test: test_bdev_zone_get_info ...passed 00:02:27.587 Test: test_bdev_zone_management ...passed 00:02:27.587 Test: test_bdev_zone_append ...passed 00:02:27.587 Test: test_bdev_zone_append_with_md ...passed 00:02:27.587 Test: test_bdev_zone_appendv ...passed 00:02:27.587 Test: test_bdev_zone_appendv_with_md ...passed 00:02:27.587 Test: test_bdev_io_get_append_location ...passed 00:02:27.587 00:02:27.587 Run Summary: Type Total Ran Passed Failed Inactive 00:02:27.587 suites 1 1 n/a 0 0 00:02:27.587 tests 8 8 8 0 0 00:02:27.587 asserts 94 94 94 0 n/a 00:02:27.587 00:02:27.587 Elapsed time = 0.000 seconds 00:02:27.849 13:26:06 -- unit/unittest.sh@27 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:02:27.849 00:02:27.849 00:02:27.849 CUnit - A unit testing framework for C - Version 2.1-3 00:02:27.849 http://cunit.sourceforge.net/ 00:02:27.849 00:02:27.849 00:02:27.849 Suite: gpt_parse 00:02:27.849 Test: test_parse_mbr_and_primary ...[2024-07-10 13:26:06.958960] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:02:27.849 [2024-07-10 
13:26:06.959421] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:02:27.849 [2024-07-10 13:26:06.959496] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:02:27.849 [2024-07-10 13:26:06.959519] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:02:27.849 [2024-07-10 13:26:06.959544] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:02:27.849 [2024-07-10 13:26:06.959564] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:02:27.849 passed 00:02:27.849 Test: test_parse_secondary ...[2024-07-10 13:26:06.959885] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:02:27.849 [2024-07-10 13:26:06.959905] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:02:27.849 [2024-07-10 13:26:06.959927] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:02:27.849 [2024-07-10 13:26:06.959945] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:02:27.849 passed 00:02:27.849 Test: test_check_mbr ...[2024-07-10 13:26:06.960272] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:02:27.849 [2024-07-10 13:26:06.960295] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:02:27.849 passed 00:02:27.849 Test: test_read_header ...passed 00:02:27.849 Test: test_read_partitions ...[2024-07-10 13:26:06.960324] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:02:27.849 [2024-07-10 13:26:06.960347] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 178:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:02:27.849 [2024-07-10 13:26:06.960368] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:02:27.849 [2024-07-10 13:26:06.960390] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 192:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:02:27.849 [2024-07-10 13:26:06.960412] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 136:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:02:27.849 [2024-07-10 13:26:06.960430] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:02:27.849 [2024-07-10 13:26:06.960469] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:02:27.849 [2024-07-10 13:26:06.960483] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 96:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:02:27.849 [2024-07-10 13:26:06.960496] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:02:27.849 [2024-07-10 
13:26:06.960508] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:02:27.849 [2024-07-10 13:26:06.960612] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:02:27.849 passed 00:02:27.849 00:02:27.849 Run Summary: Type Total Ran Passed Failed Inactive 00:02:27.849 suites 1 1 n/a 0 0 00:02:27.849 tests 5 5 5 0 0 00:02:27.849 asserts 33 33 33 0 n/a 00:02:27.849 00:02:27.849 Elapsed time = 0.008 seconds 00:02:27.849 13:26:06 -- unit/unittest.sh@28 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:02:27.849 00:02:27.849 00:02:27.849 CUnit - A unit testing framework for C - Version 2.1-3 00:02:27.849 http://cunit.sourceforge.net/ 00:02:27.849 00:02:27.849 00:02:27.849 Suite: bdev_part 00:02:27.849 Test: part_test ...[2024-07-10 13:26:06.971559] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:02:27.849 passed 00:02:27.849 Test: part_free_test ...passed 00:02:27.849 Test: part_get_io_channel_test ...passed 00:02:27.849 Test: part_construct_ext ...passed 00:02:27.849 00:02:27.849 Run Summary: Type Total Ran Passed Failed Inactive 00:02:27.849 suites 1 1 n/a 0 0 00:02:27.849 tests 4 4 4 0 0 00:02:27.849 asserts 48 48 48 0 n/a 00:02:27.849 00:02:27.849 Elapsed time = 0.008 seconds 00:02:27.849 13:26:06 -- unit/unittest.sh@29 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:02:27.849 00:02:27.849 00:02:27.849 CUnit - A unit testing framework for C - Version 2.1-3 00:02:27.849 http://cunit.sourceforge.net/ 00:02:27.849 00:02:27.849 00:02:27.849 Suite: scsi_nvme_suite 00:02:27.849 Test: scsi_nvme_translate_test ...passed 00:02:27.849 00:02:27.849 Run Summary: Type Total Ran Passed Failed Inactive 00:02:27.849 suites 1 1 n/a 0 0 00:02:27.849 tests 1 1 1 0 0 00:02:27.849 asserts 104 104 104 0 n/a 00:02:27.849 00:02:27.849 Elapsed time = 0.000 seconds 00:02:27.849 13:26:06 -- unit/unittest.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:02:27.849 00:02:27.849 00:02:27.849 CUnit - A unit testing framework for C - Version 2.1-3 00:02:27.849 http://cunit.sourceforge.net/ 00:02:27.849 00:02:27.849 00:02:27.849 Suite: lvol 00:02:27.849 Test: ut_lvs_init ...[2024-07-10 13:26:06.989965] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:02:27.849 [2024-07-10 13:26:06.990382] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:02:27.849 passed 00:02:27.849 Test: ut_lvol_init ...passed 00:02:27.850 Test: ut_lvol_snapshot ...passed 00:02:27.850 Test: ut_lvol_clone ...passed 00:02:27.850 Test: ut_lvs_destroy ...passed 00:02:27.850 Test: ut_lvs_unload ...passed 00:02:27.850 Test: ut_lvol_resize ...[2024-07-10 13:26:06.990554] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:02:27.850 passed 00:02:27.850 Test: ut_lvol_set_read_only ...passed 00:02:27.850 Test: ut_lvol_hotremove ...passed 00:02:27.850 Test: ut_vbdev_lvol_get_io_channel ...passed 00:02:27.850 Test: ut_vbdev_lvol_io_type_supported ...passed 00:02:27.850 Test: ut_lvol_read_write ...passed 00:02:27.850 Test: ut_vbdev_lvol_submit_request ...passed 00:02:27.850 Test: ut_lvol_examine_config ...passed 
00:02:27.850 Test: ut_lvol_examine_disk ...passed 00:02:27.850 Test: ut_lvol_rename ...[2024-07-10 13:26:06.990787] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:02:27.850 passed 00:02:27.850 Test: ut_bdev_finish ...passed 00:02:27.850 Test: ut_lvs_rename ...passed 00:02:27.850 Test: ut_lvol_seek ...passed 00:02:27.850 Test: ut_esnap_dev_create ...[2024-07-10 13:26:06.990894] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:02:27.850 [2024-07-10 13:26:06.990916] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:02:27.850 [2024-07-10 13:26:06.990989] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:02:27.850 [2024-07-10 13:26:06.991009] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:02:27.850 [2024-07-10 13:26:06.991029] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:02:27.850 [2024-07-10 13:26:06.991081] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1901:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:02:27.850 passed 00:02:27.850 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-10 13:26:06.991161] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:02:27.850 [2024-07-10 13:26:06.991205] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:02:27.850 passed 00:02:27.850 00:02:27.850 Run Summary: Type Total Ran Passed Failed Inactive 00:02:27.850 suites 1 1 n/a 0 0 00:02:27.850 tests 21 21 21 0 0 00:02:27.850 asserts 712 712 712 0 n/a 00:02:27.850 00:02:27.850 Elapsed time = 0.008 seconds 00:02:27.850 13:26:06 -- unit/unittest.sh@31 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:02:27.850 00:02:27.850 00:02:27.850 CUnit - A unit testing framework for C - Version 2.1-3 00:02:27.850 http://cunit.sourceforge.net/ 00:02:27.850 00:02:27.850 00:02:27.850 Suite: zone_block 00:02:27.850 Test: test_zone_block_create ...passed 00:02:27.850 Test: test_zone_block_create_invalid ...[2024-07-10 13:26:07.008810] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:02:27.850 [2024-07-10 13:26:07.009138] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-10 13:26:07.009177] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:02:27.850 [2024-07-10 13:26:07.009192] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-10 13:26:07.009210] 
/usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:02:27.850 [2024-07-10 13:26:07.009224] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:02:27.850 Test: test_get_zone_info ...[2024-07-10 13:26:07.009237] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:02:27.850 [2024-07-10 13:26:07.009249] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-10 13:26:07.009346] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 [2024-07-10 13:26:07.009387] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 passed 00:02:27.850 Test: test_supported_io_types ...[2024-07-10 13:26:07.009405] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 passed 00:02:27.850 Test: test_reset_zone ...[2024-07-10 13:26:07.009480] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 [2024-07-10 13:26:07.009508] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 passed 00:02:27.850 Test: test_open_zone ...[2024-07-10 13:26:07.009552] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 [2024-07-10 13:26:07.009837] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 passed 00:02:27.850 Test: test_zone_write ...[2024-07-10 13:26:07.009861] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 [2024-07-10 13:26:07.009913] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:02:27.850 [2024-07-10 13:26:07.009934] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 [2024-07-10 13:26:07.009957] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:02:27.850 [2024-07-10 13:26:07.009969] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
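(Editor's aside on the zone_block output: test_zone_write and its neighbours submit writes that break the zoned rules, and each rejected request also prints the generic "ERROR on bdev_io submission!" line. The sketch below spells out the rules those messages refer to — invalid zone, wrong zone state, write not at the write pointer, write past zone capacity. Types and names are made up for illustration and are not the vbdev_zone_block implementation.)

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum zone_state { ZONE_EMPTY, ZONE_OPEN, ZONE_FULL };

struct zone {
	uint64_t start_lba;      /* first LBA of the zone             */
	uint64_t capacity;       /* writable blocks in the zone       */
	uint64_t write_pointer;  /* next LBA a write must target      */
	enum zone_state state;
};

static bool
zone_write_ok(const struct zone *z, uint64_t num_zones, uint64_t zone_size,
	      uint64_t lba, uint64_t len)
{
	if (lba >= num_zones * zone_size) {
		printf("Trying to write to invalid zone (lba 0x%" PRIx64 ")\n", lba);
		return false;
	}
	if (z->state == ZONE_FULL) {
		printf("Trying to write to zone in invalid state\n");
		return false;
	}
	if (lba != z->write_pointer) {
		/* Zoned writes must land exactly on the write pointer. */
		printf("invalid address (lba 0x%" PRIx64 ", wp 0x%" PRIx64 ")\n",
		       lba, z->write_pointer);
		return false;
	}
	if (lba + len > z->start_lba + z->capacity) {
		printf("Write exceeds zone capacity\n");
		return false;
	}
	return true;
}

int
main(void)
{
	struct zone z = { .start_lba = 0x400, .capacity = 0x10,
			  .write_pointer = 0x405, .state = ZONE_OPEN };

	/* lba 0x407 != wp 0x405 -> rejected, like the log lines above. */
	return zone_write_ok(&z, 8, 0x1000, 0x407, 1) ? 1 : 0;
}
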
00:02:27.850 [2024-07-10 13:26:07.010677] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:02:27.850 [2024-07-10 13:26:07.010737] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 [2024-07-10 13:26:07.010754] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:02:27.850 [2024-07-10 13:26:07.010767] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 passed 00:02:27.850 Test: test_zone_read ...[2024-07-10 13:26:07.011496] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:02:27.850 [2024-07-10 13:26:07.011525] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 [2024-07-10 13:26:07.011578] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:02:27.850 [2024-07-10 13:26:07.011612] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 [2024-07-10 13:26:07.011624] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:02:27.850 [2024-07-10 13:26:07.011632] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 passed 00:02:27.850 Test: test_close_zone ...[2024-07-10 13:26:07.011686] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:02:27.850 [2024-07-10 13:26:07.011701] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 [2024-07-10 13:26:07.011732] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 [2024-07-10 13:26:07.011752] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 [2024-07-10 13:26:07.011795] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 passed 00:02:27.850 Test: test_finish_zone ...[2024-07-10 13:26:07.011813] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 [2024-07-10 13:26:07.011877] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:02:27.850 passed 00:02:27.850 Test: test_append_zone ...[2024-07-10 13:26:07.011890] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 [2024-07-10 13:26:07.011919] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:02:27.850 [2024-07-10 13:26:07.011928] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 [2024-07-10 13:26:07.011939] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:02:27.850 [2024-07-10 13:26:07.011947] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 passed 00:02:27.850 00:02:27.850 [2024-07-10 13:26:07.012971] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:02:27.850 [2024-07-10 13:26:07.012994] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:27.850 Run Summary: Type Total Ran Passed Failed Inactive 00:02:27.850 suites 1 1 n/a 0 0 00:02:27.850 tests 11 11 11 0 0 00:02:27.850 asserts 3437 3437 3437 0 n/a 00:02:27.850 00:02:27.850 Elapsed time = 0.008 seconds 00:02:27.850 13:26:07 -- unit/unittest.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:02:27.850 00:02:27.850 00:02:27.850 CUnit - A unit testing framework for C - Version 2.1-3 00:02:27.850 http://cunit.sourceforge.net/ 00:02:27.850 00:02:27.850 00:02:27.850 Suite: bdev 00:02:27.850 Test: basic ...[2024-07-10 13:26:07.021333] thread.c:2360:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x248c79): Operation not permitted (rc=-1) 00:02:27.850 [2024-07-10 13:26:07.021515] thread.c:2360:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x82d1fb480 (0x248c70): Operation not permitted (rc=-1) 00:02:27.850 [2024-07-10 13:26:07.021527] thread.c:2360:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x248c79): Operation not permitted (rc=-1) 00:02:27.850 passed 00:02:27.850 Test: unregister_and_close ...passed 00:02:27.850 Test: unregister_and_close_different_threads ...passed 00:02:27.851 Test: basic_qos ...passed 00:02:27.851 Test: put_channel_during_reset ...passed 00:02:27.851 Test: aborted_reset ...passed 00:02:27.851 Test: aborted_reset_no_outstanding_io ...passed 00:02:27.851 Test: io_during_reset ...passed 00:02:27.851 Test: reset_completions ...passed 00:02:27.851 Test: io_during_qos_queue ...passed 00:02:27.851 Test: io_during_qos_reset ...passed 00:02:27.851 Test: enomem ...passed 00:02:27.851 Test: enomem_multi_bdev ...passed 00:02:27.851 Test: enomem_multi_bdev_unregister ...passed 00:02:27.851 Test: enomem_multi_io_target ...passed 00:02:27.851 Test: qos_dynamic_enable ...passed 00:02:27.851 Test: bdev_histograms_mt ...passed 00:02:27.851 Test: bdev_set_io_timeout_mt ...[2024-07-10 13:26:07.048442] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x82d1fb600 not unregistered 00:02:27.851 passed 00:02:27.851 Test: 
lock_lba_range_then_submit_io ...[2024-07-10 13:26:07.049324] thread.c:2164:spdk_io_device_register: *ERROR*: io_device 0x248c58 already registered (old:0x82d1fb600 new:0x82d1fb780) 00:02:27.851 passed 00:02:27.851 Test: unregister_during_reset ...passed 00:02:27.851 Test: event_notify_and_close ...passed 00:02:27.851 Test: unregister_and_qos_poller ...passed 00:02:27.851 Suite: bdev_wrong_thread 00:02:27.851 Test: spdk_bdev_register_wt ...[2024-07-10 13:26:07.054076] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8365:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x82d1c4380 (0x82d1c4380) 00:02:27.851 passed 00:02:27.851 Test: spdk_bdev_examine_wt ...passed[2024-07-10 13:26:07.054118] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 794:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x82d1c4380 (0x82d1c4380) 00:02:27.851 00:02:27.851 00:02:27.851 Run Summary: Type Total Ran Passed Failed Inactive 00:02:27.851 suites 2 2 n/a 0 0 00:02:27.851 tests 24 24 24 0 0 00:02:27.851 asserts 621 621 621 0 n/a 00:02:27.851 00:02:27.851 Elapsed time = 0.031 seconds 00:02:27.851 00:02:27.851 real 0m1.236s 00:02:27.851 user 0m1.027s 00:02:27.851 sys 0m0.199s 00:02:27.851 13:26:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:27.851 13:26:07 -- common/autotest_common.sh@10 -- # set +x 00:02:27.851 ************************************ 00:02:27.851 END TEST unittest_bdev 00:02:27.851 ************************************ 00:02:27.851 13:26:07 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:02:27.851 13:26:07 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:02:27.851 13:26:07 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:02:27.851 13:26:07 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:02:27.851 13:26:07 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:02:27.851 13:26:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:27.851 13:26:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:27.851 13:26:07 -- common/autotest_common.sh@10 -- # set +x 00:02:27.851 ************************************ 00:02:27.851 START TEST unittest_blob_blobfs 00:02:27.851 ************************************ 00:02:27.851 13:26:07 -- common/autotest_common.sh@1104 -- # unittest_blob 00:02:27.851 13:26:07 -- unit/unittest.sh@38 -- # [[ -e /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:02:27.851 13:26:07 -- unit/unittest.sh@39 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:02:27.851 00:02:27.851 00:02:27.851 CUnit - A unit testing framework for C - Version 2.1-3 00:02:27.851 http://cunit.sourceforge.net/ 00:02:27.851 00:02:27.851 00:02:27.851 Suite: blob_nocopy_noextent 00:02:27.851 Test: blob_init ...[2024-07-10 13:26:07.118932] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5268:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:02:27.851 passed 00:02:27.851 Test: blob_thin_provision ...passed 00:02:27.851 Test: blob_read_only ...passed 00:02:27.851 Test: bs_load ...[2024-07-10 13:26:07.186999] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 897:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata 
(0x100000000) 00:02:27.851 passed 00:02:27.851 Test: bs_load_custom_cluster_size ...passed 00:02:27.851 Test: bs_load_after_failed_grow ...passed 00:02:27.851 Test: bs_cluster_sz ...[2024-07-10 13:26:07.206925] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:02:27.851 [2024-07-10 13:26:07.206989] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5400:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:02:27.851 [2024-07-10 13:26:07.207007] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3663:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:02:28.111 passed 00:02:28.111 Test: bs_resize_md ...passed 00:02:28.111 Test: bs_destroy ...passed 00:02:28.111 Test: bs_type ...passed 00:02:28.111 Test: bs_super_block ...passed 00:02:28.111 Test: bs_test_recover_cluster_count ...passed 00:02:28.111 Test: bs_grow_live ...passed 00:02:28.111 Test: bs_grow_live_no_space ...passed 00:02:28.111 Test: bs_test_grow ...passed 00:02:28.111 Test: blob_serialize_test ...passed 00:02:28.111 Test: super_block_crc ...passed 00:02:28.111 Test: blob_thin_prov_write_count_io ...passed 00:02:28.111 Test: bs_load_iter_test ...passed 00:02:28.111 Test: blob_relations ...[2024-07-10 13:26:07.324571] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:28.111 [2024-07-10 13:26:07.324651] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:28.111 [2024-07-10 13:26:07.324748] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:28.111 [2024-07-10 13:26:07.324756] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:28.111 passed 00:02:28.111 Test: blob_relations2 ...[2024-07-10 13:26:07.335724] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:28.111 [2024-07-10 13:26:07.335782] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:28.111 [2024-07-10 13:26:07.335790] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:28.111 [2024-07-10 13:26:07.335796] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:28.111 [2024-07-10 13:26:07.335906] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:28.111 [2024-07-10 13:26:07.335914] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:28.111 [2024-07-10 13:26:07.335945] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:28.111 [2024-07-10 13:26:07.335951] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:28.112 passed 00:02:28.112 Test: blob_relations3 ...passed 00:02:28.112 Test: blobstore_clean_power_failure ...passed 
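(Editor's aside on the bs_cluster_sz and bs_load errors above: the suite deliberately hands the blobstore zeroed options, a cluster size smaller than the 4096-byte page, and more reserved metadata than the device can hold, and expects each to be refused. The toy option check below assumes a 4096-byte metadata page; field names are illustrative, not the blobstore.c structures.)

#include <errno.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define BLOB_PAGE_SIZE 4096u  /* "Cluster size 4095 is smaller than page size 4096" */

struct bs_opts {
	uint32_t cluster_sz;    /* bytes per cluster           */
	uint32_t num_md_pages;  /* pages reserved for metadata */
};

static int
bs_opts_verify(const struct bs_opts *o, uint64_t dev_size)
{
	uint64_t total_clusters, md_clusters;

	if (o->cluster_sz == 0 || o->num_md_pages == 0) {
		fprintf(stderr, "Blobstore options cannot be set to 0\n");
		return -EINVAL;
	}
	if (o->cluster_sz < BLOB_PAGE_SIZE) {
		fprintf(stderr, "Cluster size %" PRIu32 " is smaller than page size %u\n",
			o->cluster_sz, BLOB_PAGE_SIZE);
		return -EINVAL;
	}
	total_clusters = dev_size / o->cluster_sz;
	md_clusters = ((uint64_t)o->num_md_pages * BLOB_PAGE_SIZE +
		       o->cluster_sz - 1) / o->cluster_sz;
	if (md_clusters > total_clusters) {
		fprintf(stderr, "metadata cannot use more clusters than available\n");
		return -ENOSPC;
	}
	return 0;
}

int
main(void)
{
	struct bs_opts too_small = { .cluster_sz = 4095, .num_md_pages = 16 };

	/* Rejected just like the bs_cluster_sz test above. */
	return bs_opts_verify(&too_small, 64 * 1024 * 1024) == -EINVAL ? 0 : 1;
}
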
00:02:28.112 Test: blob_delete_snapshot_power_failure ...[2024-07-10 13:26:07.470961] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:02:28.372 [2024-07-10 13:26:07.480631] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:02:28.372 [2024-07-10 13:26:07.480685] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:02:28.372 [2024-07-10 13:26:07.480693] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:28.372 [2024-07-10 13:26:07.490315] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:02:28.372 [2024-07-10 13:26:07.490345] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:02:28.372 [2024-07-10 13:26:07.490352] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:02:28.372 [2024-07-10 13:26:07.490358] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:28.372 [2024-07-10 13:26:07.499960] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:02:28.372 [2024-07-10 13:26:07.499989] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:28.372 [2024-07-10 13:26:07.509569] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:02:28.372 [2024-07-10 13:26:07.509596] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:28.372 [2024-07-10 13:26:07.519256] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:02:28.372 [2024-07-10 13:26:07.519290] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:28.372 passed 00:02:28.372 Test: blob_create_snapshot_power_failure ...[2024-07-10 13:26:07.548021] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:02:28.372 [2024-07-10 13:26:07.567381] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:02:28.372 [2024-07-10 13:26:07.577047] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:02:28.372 passed 00:02:28.372 Test: blob_io_unit ...passed 00:02:28.372 Test: blob_io_unit_compatibility ...passed 00:02:28.372 Test: blob_ext_md_pages ...passed 00:02:28.372 Test: blob_esnap_io_4096_4096 ...passed 00:02:28.372 Test: blob_esnap_io_512_512 ...passed 00:02:28.372 Test: blob_esnap_io_4096_512 ...passed 00:02:28.372 Test: blob_esnap_io_512_4096 ...passed 00:02:28.372 Suite: blob_bs_nocopy_noextent 00:02:28.631 Test: blob_open ...passed 00:02:28.631 Test: blob_create ...[2024-07-10 13:26:07.759632] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: 
Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:02:28.631 passed 00:02:28.631 Test: blob_create_loop ...passed 00:02:28.631 Test: blob_create_fail ...[2024-07-10 13:26:07.828461] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:02:28.631 passed 00:02:28.631 Test: blob_create_internal ...passed 00:02:28.631 Test: blob_create_zero_extent ...passed 00:02:28.631 Test: blob_snapshot ...passed 00:02:28.631 Test: blob_clone ...passed 00:02:28.632 Test: blob_inflate ...[2024-07-10 13:26:07.977839] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:02:28.632 passed 00:02:28.891 Test: blob_delete ...passed 00:02:28.891 Test: blob_resize_test ...[2024-07-10 13:26:08.034942] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:02:28.891 passed 00:02:28.891 Test: channel_ops ...passed 00:02:28.891 Test: blob_super ...passed 00:02:28.891 Test: blob_rw_verify_iov ...passed 00:02:28.891 Test: blob_unmap ...passed 00:02:28.891 Test: blob_iter ...passed 00:02:28.891 Test: blob_parse_md ...passed 00:02:28.891 Test: bs_load_pending_removal ...passed 00:02:29.151 Test: bs_unload ...[2024-07-10 13:26:08.263478] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:02:29.151 passed 00:02:29.151 Test: bs_usable_clusters ...passed 00:02:29.151 Test: blob_crc ...[2024-07-10 13:26:08.319918] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:02:29.151 [2024-07-10 13:26:08.319972] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:02:29.151 passed 00:02:29.151 Test: blob_flags ...passed 00:02:29.151 Test: bs_version ...passed 00:02:29.151 Test: blob_set_xattrs_test ...[2024-07-10 13:26:08.405151] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:02:29.151 [2024-07-10 13:26:08.405214] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:02:29.151 passed 00:02:29.151 Test: blob_thin_prov_alloc ...passed 00:02:29.151 Test: blob_insert_cluster_msg_test ...passed 00:02:29.151 Test: blob_thin_prov_rw ...passed 00:02:29.410 Test: blob_thin_prov_rle ...passed 00:02:29.410 Test: blob_thin_prov_rw_iov ...passed 00:02:29.410 Test: blob_snapshot_rw ...passed 00:02:29.410 Test: blob_snapshot_rw_iov ...passed 00:02:29.410 Test: blob_inflate_rw ...passed 00:02:29.410 Test: blob_snapshot_freeze_io ...passed 00:02:29.670 Test: blob_operation_split_rw ...passed 00:02:29.670 Test: blob_operation_split_rw_iov ...passed 00:02:29.670 Test: blob_simultaneous_operations ...[2024-07-10 13:26:08.857531] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:29.670 [2024-07-10 13:26:08.857593] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:29.670 [2024-07-10 13:26:08.857856] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:29.670 [2024-07-10 13:26:08.857874] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:29.670 [2024-07-10 13:26:08.861035] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:29.670 [2024-07-10 13:26:08.861064] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:29.670 [2024-07-10 13:26:08.861086] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:29.670 [2024-07-10 13:26:08.861095] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:29.670 passed 00:02:29.670 Test: blob_persist_test ...passed 00:02:29.670 Test: blob_decouple_snapshot ...passed 00:02:29.670 Test: blob_seek_io_unit ...passed 00:02:29.670 Test: blob_nested_freezes ...passed 00:02:29.670 Suite: blob_blob_nocopy_noextent 00:02:29.670 Test: blob_write ...passed 00:02:29.929 Test: blob_read ...passed 00:02:29.929 Test: blob_rw_verify ...passed 00:02:29.929 Test: blob_rw_verify_iov_nomem ...passed 00:02:29.929 Test: blob_rw_iov_read_only ...passed 00:02:29.929 Test: blob_xattr ...passed 00:02:29.929 Test: blob_dirty_shutdown ...passed 00:02:29.929 Test: blob_is_degraded ...passed 00:02:29.929 Suite: blob_esnap_bs_nocopy_noextent 00:02:29.929 Test: blob_esnap_create ...passed 00:02:30.188 Test: blob_esnap_thread_add_remove ...passed 00:02:30.188 Test: blob_esnap_clone_snapshot ...passed 00:02:30.188 Test: blob_esnap_clone_inflate ...passed 00:02:30.188 Test: blob_esnap_clone_decouple ...passed 00:02:30.188 Test: blob_esnap_clone_reload ...passed 00:02:30.188 Test: blob_esnap_hotplug ...passed 00:02:30.188 Suite: blob_nocopy_extent 00:02:30.188 Test: blob_init ...[2024-07-10 13:26:09.434882] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5268:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:02:30.188 passed 00:02:30.188 Test: blob_thin_provision ...passed 00:02:30.188 Test: blob_read_only ...passed 00:02:30.188 Test: bs_load ...passed 00:02:30.188 Test: bs_load_custom_cluster_size ...[2024-07-10 13:26:09.472956] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 897:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:02:30.188 passed 00:02:30.188 Test: bs_load_after_failed_grow ...passed 00:02:30.188 Test: bs_cluster_sz ...[2024-07-10 13:26:09.492486] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:02:30.188 [2024-07-10 13:26:09.492546] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5400:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
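(Editor's note on the framework banner that heads every block above, "CUnit - A unit testing framework for C": each suite is a plain CUnit registration along the lines of the minimal harness below, which is what produces the per-test "passed" lines and the closing Run Summary table. The suite and test names are copied from the log; the two test bodies here are empty placeholders, not the real blob_ut tests.)

#include <CUnit/Basic.h>

static void blob_init_test(void) { CU_ASSERT(1 == 1); }
static void bs_load_test(void)   { CU_ASSERT(1 == 1); }

int
main(void)
{
	CU_pSuite suite;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}
	suite = CU_add_suite("blob_nocopy_noextent", NULL, NULL);
	if (suite == NULL ||
	    CU_add_test(suite, "blob_init", blob_init_test) == NULL ||
	    CU_add_test(suite, "bs_load", bs_load_test) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}
	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();   /* prints the per-test and Run Summary lines */
	CU_cleanup_registry();
	return CU_get_number_of_failures();
}
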
00:02:30.188 [2024-07-10 13:26:09.492557] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3663:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:02:30.188 passed 00:02:30.188 Test: bs_resize_md ...passed 00:02:30.188 Test: bs_destroy ...passed 00:02:30.188 Test: bs_type ...passed 00:02:30.188 Test: bs_super_block ...passed 00:02:30.188 Test: bs_test_recover_cluster_count ...passed 00:02:30.188 Test: bs_grow_live ...passed 00:02:30.188 Test: bs_grow_live_no_space ...passed 00:02:30.448 Test: bs_test_grow ...passed 00:02:30.448 Test: blob_serialize_test ...passed 00:02:30.448 Test: super_block_crc ...passed 00:02:30.448 Test: blob_thin_prov_write_count_io ...passed 00:02:30.448 Test: bs_load_iter_test ...passed 00:02:30.448 Test: blob_relations ...[2024-07-10 13:26:09.607290] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:30.448 [2024-07-10 13:26:09.607352] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:30.448 [2024-07-10 13:26:09.607429] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:30.448 [2024-07-10 13:26:09.607437] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:30.448 passed 00:02:30.448 Test: blob_relations2 ...[2024-07-10 13:26:09.618131] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:30.448 [2024-07-10 13:26:09.618178] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:30.448 [2024-07-10 13:26:09.618186] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:30.448 [2024-07-10 13:26:09.618192] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:30.448 [2024-07-10 13:26:09.618301] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:30.448 [2024-07-10 13:26:09.618309] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:30.448 [2024-07-10 13:26:09.618344] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:30.448 [2024-07-10 13:26:09.618350] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:30.448 passed 00:02:30.448 Test: blob_relations3 ...passed 00:02:30.448 Test: blobstore_clean_power_failure ...passed 00:02:30.448 Test: blob_delete_snapshot_power_failure ...[2024-07-10 13:26:09.752359] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:02:30.448 [2024-07-10 13:26:09.761973] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:02:30.448 [2024-07-10 13:26:09.771609] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:02:30.448 [2024-07-10 
13:26:09.771656] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:02:30.448 [2024-07-10 13:26:09.771665] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:30.448 [2024-07-10 13:26:09.781224] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:02:30.448 [2024-07-10 13:26:09.781269] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:02:30.448 [2024-07-10 13:26:09.781276] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:02:30.448 [2024-07-10 13:26:09.781283] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:30.448 [2024-07-10 13:26:09.790967] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:02:30.448 [2024-07-10 13:26:09.791007] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:02:30.448 [2024-07-10 13:26:09.791014] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:02:30.448 [2024-07-10 13:26:09.791021] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:30.448 [2024-07-10 13:26:09.800719] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:02:30.448 [2024-07-10 13:26:09.800760] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:30.448 [2024-07-10 13:26:09.810435] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:02:30.448 [2024-07-10 13:26:09.810504] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:30.708 [2024-07-10 13:26:09.820171] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:02:30.708 [2024-07-10 13:26:09.820222] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:30.708 passed 00:02:30.708 Test: blob_create_snapshot_power_failure ...[2024-07-10 13:26:09.848896] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:02:30.708 [2024-07-10 13:26:09.858499] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:02:30.708 [2024-07-10 13:26:09.877527] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:02:30.708 [2024-07-10 13:26:09.887122] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:02:30.708 passed 00:02:30.708 Test: blob_io_unit ...passed 00:02:30.708 Test: blob_io_unit_compatibility ...passed 00:02:30.708 Test: blob_ext_md_pages ...passed 00:02:30.708 Test: blob_esnap_io_4096_4096 ...passed 00:02:30.708 Test: 
blob_esnap_io_512_512 ...passed 00:02:30.708 Test: blob_esnap_io_4096_512 ...passed 00:02:30.708 Test: blob_esnap_io_512_4096 ...passed 00:02:30.708 Suite: blob_bs_nocopy_extent 00:02:30.708 Test: blob_open ...passed 00:02:30.708 Test: blob_create ...[2024-07-10 13:26:10.069816] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:02:30.968 passed 00:02:30.968 Test: blob_create_loop ...passed 00:02:30.968 Test: blob_create_fail ...[2024-07-10 13:26:10.138685] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:02:30.968 passed 00:02:30.968 Test: blob_create_internal ...passed 00:02:30.968 Test: blob_create_zero_extent ...passed 00:02:30.968 Test: blob_snapshot ...passed 00:02:30.968 Test: blob_clone ...passed 00:02:30.968 Test: blob_inflate ...[2024-07-10 13:26:10.286750] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:02:30.968 passed 00:02:30.968 Test: blob_delete ...passed 00:02:31.227 Test: blob_resize_test ...[2024-07-10 13:26:10.343777] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:02:31.227 passed 00:02:31.227 Test: channel_ops ...passed 00:02:31.227 Test: blob_super ...passed 00:02:31.227 Test: blob_rw_verify_iov ...passed 00:02:31.228 Test: blob_unmap ...passed 00:02:31.228 Test: blob_iter ...passed 00:02:31.228 Test: blob_parse_md ...passed 00:02:31.228 Test: bs_load_pending_removal ...passed 00:02:31.228 Test: bs_unload ...[2024-07-10 13:26:10.570805] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:02:31.228 passed 00:02:31.487 Test: bs_usable_clusters ...passed 00:02:31.487 Test: blob_crc ...[2024-07-10 13:26:10.626958] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:02:31.487 [2024-07-10 13:26:10.627032] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:02:31.487 passed 00:02:31.487 Test: blob_flags ...passed 00:02:31.487 Test: bs_version ...passed 00:02:31.487 Test: blob_set_xattrs_test ...[2024-07-10 13:26:10.712540] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:02:31.487 [2024-07-10 13:26:10.712594] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:02:31.487 passed 00:02:31.487 Test: blob_thin_prov_alloc ...passed 00:02:31.487 Test: blob_insert_cluster_msg_test ...passed 00:02:31.487 Test: blob_thin_prov_rw ...passed 00:02:31.487 Test: blob_thin_prov_rle ...passed 00:02:31.747 Test: blob_thin_prov_rw_iov ...passed 00:02:31.747 Test: blob_snapshot_rw ...passed 00:02:31.747 Test: blob_snapshot_rw_iov ...passed 00:02:31.747 Test: blob_inflate_rw ...passed 00:02:31.747 Test: blob_snapshot_freeze_io ...passed 00:02:31.747 Test: blob_operation_split_rw ...passed 00:02:32.006 Test: blob_operation_split_rw_iov ...passed 00:02:32.006 Test: blob_simultaneous_operations ...[2024-07-10 13:26:11.152944] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:32.006 [2024-07-10 13:26:11.153007] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:32.006 [2024-07-10 13:26:11.153267] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:32.006 [2024-07-10 13:26:11.153283] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:32.006 [2024-07-10 13:26:11.156439] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:32.006 [2024-07-10 13:26:11.156466] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:32.006 [2024-07-10 13:26:11.156501] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:32.006 [2024-07-10 13:26:11.156508] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:32.006 passed 00:02:32.006 Test: blob_persist_test ...passed 00:02:32.006 Test: blob_decouple_snapshot ...passed 00:02:32.006 Test: blob_seek_io_unit ...passed 00:02:32.006 Test: blob_nested_freezes ...passed 00:02:32.006 Suite: blob_blob_nocopy_extent 00:02:32.006 Test: blob_write ...passed 00:02:32.006 Test: blob_read ...passed 00:02:32.266 Test: blob_rw_verify ...passed 00:02:32.266 Test: blob_rw_verify_iov_nomem ...passed 00:02:32.266 Test: blob_rw_iov_read_only ...passed 00:02:32.266 Test: blob_xattr ...passed 00:02:32.266 Test: blob_dirty_shutdown ...passed 00:02:32.266 Test: blob_is_degraded ...passed 00:02:32.266 Suite: blob_esnap_bs_nocopy_extent 00:02:32.266 Test: blob_esnap_create ...passed 00:02:32.266 Test: blob_esnap_thread_add_remove ...passed 00:02:32.266 Test: blob_esnap_clone_snapshot ...passed 00:02:32.525 Test: blob_esnap_clone_inflate ...passed 00:02:32.525 Test: blob_esnap_clone_decouple ...passed 00:02:32.525 Test: blob_esnap_clone_reload ...passed 00:02:32.525 Test: blob_esnap_hotplug ...passed 00:02:32.525 Suite: blob_copy_noextent 00:02:32.525 Test: blob_init ...[2024-07-10 13:26:11.727690] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5268:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:02:32.525 passed 00:02:32.525 Test: blob_thin_provision ...passed 00:02:32.525 Test: blob_read_only ...passed 00:02:32.525 Test: bs_load ...passed 00:02:32.525 Test: bs_load_custom_cluster_size ...[2024-07-10 13:26:11.765564] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 897:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:02:32.525 passed 00:02:32.525 Test: bs_load_after_failed_grow ...passed 00:02:32.525 Test: bs_cluster_sz ...[2024-07-10 13:26:11.784661] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:02:32.525 [2024-07-10 13:26:11.784710] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5400:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:02:32.525 [2024-07-10 13:26:11.784720] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3663:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:02:32.525 passed 00:02:32.525 Test: bs_resize_md ...passed 00:02:32.525 Test: bs_destroy ...passed 00:02:32.525 Test: bs_type ...passed 00:02:32.525 Test: bs_super_block ...passed 00:02:32.525 Test: bs_test_recover_cluster_count ...passed 00:02:32.525 Test: bs_grow_live ...passed 00:02:32.525 Test: bs_grow_live_no_space ...passed 00:02:32.525 Test: bs_test_grow ...passed 00:02:32.525 Test: blob_serialize_test ...passed 00:02:32.525 Test: super_block_crc ...passed 00:02:32.525 Test: blob_thin_prov_write_count_io ...passed 00:02:32.785 Test: bs_load_iter_test ...passed 00:02:32.785 Test: blob_relations ...[2024-07-10 13:26:11.900002] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:32.785 [2024-07-10 13:26:11.900058] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:32.785 [2024-07-10 13:26:11.900138] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:32.785 [2024-07-10 13:26:11.900145] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:32.785 passed 00:02:32.785 Test: blob_relations2 ...[2024-07-10 13:26:11.910801] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:32.785 [2024-07-10 13:26:11.910827] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:32.785 [2024-07-10 13:26:11.910835] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:32.785 [2024-07-10 13:26:11.910841] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:32.785 [2024-07-10 13:26:11.910927] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:32.785 [2024-07-10 13:26:11.910935] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:32.785 [2024-07-10 13:26:11.910968] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:32.785 [2024-07-10 13:26:11.910974] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:32.785 passed 00:02:32.785 Test: blob_relations3 ...passed 00:02:32.785 Test: blobstore_clean_power_failure ...passed 00:02:32.785 Test: blob_delete_snapshot_power_failure ...[2024-07-10 13:26:12.044288] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:02:32.785 [2024-07-10 13:26:12.053898] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:02:32.785 [2024-07-10 13:26:12.053947] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:02:32.785 [2024-07-10 
13:26:12.053954] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:32.785 [2024-07-10 13:26:12.063509] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:02:32.786 [2024-07-10 13:26:12.063534] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:02:32.786 [2024-07-10 13:26:12.063541] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:02:32.786 [2024-07-10 13:26:12.063548] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:32.786 [2024-07-10 13:26:12.073118] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:02:32.786 [2024-07-10 13:26:12.073145] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:32.786 [2024-07-10 13:26:12.082769] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:02:32.786 [2024-07-10 13:26:12.082821] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:32.786 [2024-07-10 13:26:12.092439] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:02:32.786 [2024-07-10 13:26:12.092475] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:32.786 passed 00:02:32.786 Test: blob_create_snapshot_power_failure ...[2024-07-10 13:26:12.121114] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:02:32.786 [2024-07-10 13:26:12.140050] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:02:33.045 [2024-07-10 13:26:12.149664] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:02:33.045 passed 00:02:33.045 Test: blob_io_unit ...passed 00:02:33.045 Test: blob_io_unit_compatibility ...passed 00:02:33.045 Test: blob_ext_md_pages ...passed 00:02:33.045 Test: blob_esnap_io_4096_4096 ...passed 00:02:33.045 Test: blob_esnap_io_512_512 ...passed 00:02:33.045 Test: blob_esnap_io_4096_512 ...passed 00:02:33.045 Test: blob_esnap_io_512_4096 ...passed 00:02:33.045 Suite: blob_bs_copy_noextent 00:02:33.045 Test: blob_open ...passed 00:02:33.045 Test: blob_create ...[2024-07-10 13:26:12.330509] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:02:33.045 passed 00:02:33.045 Test: blob_create_loop ...passed 00:02:33.045 Test: blob_create_fail ...[2024-07-10 13:26:12.398970] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:02:33.303 passed 00:02:33.303 Test: blob_create_internal ...passed 00:02:33.303 Test: blob_create_zero_extent ...passed 00:02:33.303 Test: blob_snapshot ...passed 00:02:33.303 Test: blob_clone ...passed 00:02:33.303 
Test: blob_inflate ...[2024-07-10 13:26:12.545073] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:02:33.303 passed 00:02:33.303 Test: blob_delete ...passed 00:02:33.303 Test: blob_resize_test ...[2024-07-10 13:26:12.601383] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:02:33.303 passed 00:02:33.303 Test: channel_ops ...passed 00:02:33.562 Test: blob_super ...passed 00:02:33.563 Test: blob_rw_verify_iov ...passed 00:02:33.563 Test: blob_unmap ...passed 00:02:33.563 Test: blob_iter ...passed 00:02:33.563 Test: blob_parse_md ...passed 00:02:33.563 Test: bs_load_pending_removal ...passed 00:02:33.563 Test: bs_unload ...[2024-07-10 13:26:12.827385] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:02:33.563 passed 00:02:33.563 Test: bs_usable_clusters ...passed 00:02:33.563 Test: blob_crc ...[2024-07-10 13:26:12.883625] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:02:33.563 [2024-07-10 13:26:12.883676] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:02:33.563 passed 00:02:33.563 Test: blob_flags ...passed 00:02:33.821 Test: bs_version ...passed 00:02:33.821 Test: blob_set_xattrs_test ...[2024-07-10 13:26:12.969016] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:02:33.821 [2024-07-10 13:26:12.969067] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:02:33.821 passed 00:02:33.821 Test: blob_thin_prov_alloc ...passed 00:02:33.821 Test: blob_insert_cluster_msg_test ...passed 00:02:33.821 Test: blob_thin_prov_rw ...passed 00:02:33.821 Test: blob_thin_prov_rle ...passed 00:02:33.821 Test: blob_thin_prov_rw_iov ...passed 00:02:33.821 Test: blob_snapshot_rw ...passed 00:02:34.079 Test: blob_snapshot_rw_iov ...passed 00:02:34.079 Test: blob_inflate_rw ...passed 00:02:34.079 Test: blob_snapshot_freeze_io ...passed 00:02:34.079 Test: blob_operation_split_rw ...passed 00:02:34.079 Test: blob_operation_split_rw_iov ...passed 00:02:34.079 Test: blob_simultaneous_operations ...[2024-07-10 13:26:13.403911] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:34.079 [2024-07-10 13:26:13.403971] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:34.079 [2024-07-10 13:26:13.404215] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:34.079 [2024-07-10 13:26:13.404237] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:34.079 [2024-07-10 13:26:13.406322] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:34.079 [2024-07-10 13:26:13.406355] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:34.079 [2024-07-10 
13:26:13.406371] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:34.079 [2024-07-10 13:26:13.406377] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:34.079 passed 00:02:34.338 Test: blob_persist_test ...passed 00:02:34.338 Test: blob_decouple_snapshot ...passed 00:02:34.338 Test: blob_seek_io_unit ...passed 00:02:34.338 Test: blob_nested_freezes ...passed 00:02:34.338 Suite: blob_blob_copy_noextent 00:02:34.338 Test: blob_write ...passed 00:02:34.338 Test: blob_read ...passed 00:02:34.338 Test: blob_rw_verify ...passed 00:02:34.338 Test: blob_rw_verify_iov_nomem ...passed 00:02:34.338 Test: blob_rw_iov_read_only ...passed 00:02:34.597 Test: blob_xattr ...passed 00:02:34.597 Test: blob_dirty_shutdown ...passed 00:02:34.597 Test: blob_is_degraded ...passed 00:02:34.597 Suite: blob_esnap_bs_copy_noextent 00:02:34.597 Test: blob_esnap_create ...passed 00:02:34.597 Test: blob_esnap_thread_add_remove ...passed 00:02:34.597 Test: blob_esnap_clone_snapshot ...passed 00:02:34.597 Test: blob_esnap_clone_inflate ...passed 00:02:34.597 Test: blob_esnap_clone_decouple ...passed 00:02:34.597 Test: blob_esnap_clone_reload ...passed 00:02:34.856 Test: blob_esnap_hotplug ...passed 00:02:34.856 Suite: blob_copy_extent 00:02:34.856 Test: blob_init ...[2024-07-10 13:26:13.970051] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5268:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:02:34.856 passed 00:02:34.856 Test: blob_thin_provision ...passed 00:02:34.856 Test: blob_read_only ...passed 00:02:34.856 Test: bs_load ...passed 00:02:34.856 Test: bs_load_custom_cluster_size ...[2024-07-10 13:26:14.007783] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 897:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:02:34.856 passed 00:02:34.856 Test: bs_load_after_failed_grow ...passed 00:02:34.856 Test: bs_cluster_sz ...[2024-07-10 13:26:14.026922] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:02:34.856 [2024-07-10 13:26:14.026970] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5400:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
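(Editor's aside on the blob_delete_snapshot_power_failure and blob_create_snapshot_power_failure tests that recur in every suite above: conceptually they run an operation against a device that stops persisting writes after N I/Os, reload the blobstore, and check that nothing is left half-deleted or half-created. Below is a hypothetical fail-after-N write shim in that spirit, not the actual blob_ut bs_dev code.)

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

struct pf_dev {
	uint8_t  *backing;      /* simulated persistent image               */
	uint64_t  size;         /* bytes                                    */
	uint64_t  writes_left;  /* writes persisted before the "power cut"  */
};

/* Returns true to the caller either way once the threshold is hit: the
 * write is acknowledged but never reaches the backing image, mimicking
 * power loss between acknowledgement and persistence. */
static bool
pf_dev_write(struct pf_dev *dev, uint64_t off, const void *buf, uint64_t len)
{
	if (off + len > dev->size) {
		return false;                   /* out of range: real error */
	}
	if (dev->writes_left == 0) {
		return true;                    /* dropped on the floor     */
	}
	dev->writes_left--;
	memcpy(dev->backing + off, buf, len);
	return true;
}

int
main(void)
{
	uint8_t image[4096] = { 0 };
	struct pf_dev dev = { .backing = image, .size = sizeof(image),
			      .writes_left = 1 };
	uint8_t page[512] = { 0xaa };

	pf_dev_write(&dev, 0, page, sizeof(page));    /* persisted           */
	pf_dev_write(&dev, 512, page, sizeof(page));  /* lost to "power cut" */

	/* A real test would now reload the blobstore from `image` and assert
	 * it is still consistent despite the missing second write. */
	return image[512] == 0 ? 0 : 1;
}
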
00:02:34.856 [2024-07-10 13:26:14.026980] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3663:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:02:34.856 passed 00:02:34.856 Test: bs_resize_md ...passed 00:02:34.856 Test: bs_destroy ...passed 00:02:34.856 Test: bs_type ...passed 00:02:34.856 Test: bs_super_block ...passed 00:02:34.856 Test: bs_test_recover_cluster_count ...passed 00:02:34.856 Test: bs_grow_live ...passed 00:02:34.856 Test: bs_grow_live_no_space ...passed 00:02:34.856 Test: bs_test_grow ...passed 00:02:34.856 Test: blob_serialize_test ...passed 00:02:34.856 Test: super_block_crc ...passed 00:02:34.856 Test: blob_thin_prov_write_count_io ...passed 00:02:34.856 Test: bs_load_iter_test ...passed 00:02:34.856 Test: blob_relations ...[2024-07-10 13:26:14.140914] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:34.856 [2024-07-10 13:26:14.140967] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:34.856 [2024-07-10 13:26:14.141024] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:34.856 [2024-07-10 13:26:14.141030] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:34.856 passed 00:02:34.856 Test: blob_relations2 ...[2024-07-10 13:26:14.151033] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:34.856 [2024-07-10 13:26:14.151077] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:34.856 [2024-07-10 13:26:14.151085] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:34.856 [2024-07-10 13:26:14.151091] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:34.856 [2024-07-10 13:26:14.151184] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:34.856 [2024-07-10 13:26:14.151192] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:34.856 [2024-07-10 13:26:14.151222] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:34.856 [2024-07-10 13:26:14.151245] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:34.856 passed 00:02:34.856 Test: blob_relations3 ...passed 00:02:35.115 Test: blobstore_clean_power_failure ...passed 00:02:35.115 Test: blob_delete_snapshot_power_failure ...[2024-07-10 13:26:14.283208] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:02:35.115 [2024-07-10 13:26:14.292752] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:02:35.115 [2024-07-10 13:26:14.302318] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:02:35.115 [2024-07-10 
13:26:14.302363] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:02:35.115 [2024-07-10 13:26:14.302370] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:35.115 [2024-07-10 13:26:14.311930] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:02:35.115 [2024-07-10 13:26:14.311957] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:02:35.115 [2024-07-10 13:26:14.311964] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:02:35.115 [2024-07-10 13:26:14.311971] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:35.115 [2024-07-10 13:26:14.321547] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:02:35.115 [2024-07-10 13:26:14.321577] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:02:35.115 [2024-07-10 13:26:14.321584] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:02:35.115 [2024-07-10 13:26:14.321590] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:35.115 [2024-07-10 13:26:14.331166] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:02:35.115 [2024-07-10 13:26:14.331201] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:35.115 [2024-07-10 13:26:14.340741] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:02:35.115 [2024-07-10 13:26:14.340798] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:35.115 [2024-07-10 13:26:14.350345] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:02:35.115 [2024-07-10 13:26:14.350390] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:35.115 passed 00:02:35.115 Test: blob_create_snapshot_power_failure ...[2024-07-10 13:26:14.378853] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:02:35.115 [2024-07-10 13:26:14.388362] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:02:35.115 [2024-07-10 13:26:14.407251] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:02:35.115 [2024-07-10 13:26:14.416785] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:02:35.115 passed 00:02:35.115 Test: blob_io_unit ...passed 00:02:35.115 Test: blob_io_unit_compatibility ...passed 00:02:35.115 Test: blob_ext_md_pages ...passed 00:02:35.374 Test: blob_esnap_io_4096_4096 ...passed 00:02:35.374 Test: 
blob_esnap_io_512_512 ...passed 00:02:35.374 Test: blob_esnap_io_4096_512 ...passed 00:02:35.374 Test: blob_esnap_io_512_4096 ...passed 00:02:35.374 Suite: blob_bs_copy_extent 00:02:35.374 Test: blob_open ...passed 00:02:35.374 Test: blob_create ...[2024-07-10 13:26:14.599270] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:02:35.374 passed 00:02:35.374 Test: blob_create_loop ...passed 00:02:35.374 Test: blob_create_fail ...[2024-07-10 13:26:14.668587] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:02:35.374 passed 00:02:35.374 Test: blob_create_internal ...passed 00:02:35.374 Test: blob_create_zero_extent ...passed 00:02:35.634 Test: blob_snapshot ...passed 00:02:35.634 Test: blob_clone ...passed 00:02:35.634 Test: blob_inflate ...[2024-07-10 13:26:14.815755] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:02:35.634 passed 00:02:35.634 Test: blob_delete ...passed 00:02:35.634 Test: blob_resize_test ...[2024-07-10 13:26:14.872657] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:02:35.634 passed 00:02:35.634 Test: channel_ops ...passed 00:02:35.634 Test: blob_super ...passed 00:02:35.634 Test: blob_rw_verify_iov ...passed 00:02:35.634 Test: blob_unmap ...passed 00:02:35.893 Test: blob_iter ...passed 00:02:35.893 Test: blob_parse_md ...passed 00:02:35.893 Test: bs_load_pending_removal ...passed 00:02:35.893 Test: bs_unload ...[2024-07-10 13:26:15.099354] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:02:35.893 passed 00:02:35.893 Test: bs_usable_clusters ...passed 00:02:35.893 Test: blob_crc ...[2024-07-10 13:26:15.156012] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:02:35.893 [2024-07-10 13:26:15.156065] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:02:35.893 passed 00:02:35.893 Test: blob_flags ...passed 00:02:35.893 Test: bs_version ...passed 00:02:35.893 Test: blob_set_xattrs_test ...[2024-07-10 13:26:15.241580] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:02:35.893 [2024-07-10 13:26:15.241635] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:02:35.893 passed 00:02:36.152 Test: blob_thin_prov_alloc ...passed 00:02:36.152 Test: blob_insert_cluster_msg_test ...passed 00:02:36.152 Test: blob_thin_prov_rw ...passed 00:02:36.152 Test: blob_thin_prov_rle ...passed 00:02:36.152 Test: blob_thin_prov_rw_iov ...passed 00:02:36.152 Test: blob_snapshot_rw ...passed 00:02:36.152 Test: blob_snapshot_rw_iov ...passed 00:02:36.411 Test: blob_inflate_rw ...passed 00:02:36.411 Test: blob_snapshot_freeze_io ...passed 00:02:36.411 Test: blob_operation_split_rw ...passed 00:02:36.411 Test: blob_operation_split_rw_iov ...passed 00:02:36.411 Test: blob_simultaneous_operations ...[2024-07-10 13:26:15.679019] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:36.411 [2024-07-10 13:26:15.679089] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:36.411 [2024-07-10 13:26:15.679333] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:36.411 [2024-07-10 13:26:15.679352] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:36.411 [2024-07-10 13:26:15.681319] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:36.411 [2024-07-10 13:26:15.681345] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:36.411 [2024-07-10 13:26:15.681364] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:36.411 [2024-07-10 13:26:15.681371] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:36.411 passed 00:02:36.411 Test: blob_persist_test ...passed 00:02:36.411 Test: blob_decouple_snapshot ...passed 00:02:36.670 Test: blob_seek_io_unit ...passed 00:02:36.670 Test: blob_nested_freezes ...passed 00:02:36.670 Suite: blob_blob_copy_extent 00:02:36.670 Test: blob_write ...passed 00:02:36.670 Test: blob_read ...passed 00:02:36.670 Test: blob_rw_verify ...passed 00:02:36.670 Test: blob_rw_verify_iov_nomem ...passed 00:02:36.670 Test: blob_rw_iov_read_only ...passed 00:02:36.670 Test: blob_xattr ...passed 00:02:36.670 Test: blob_dirty_shutdown ...passed 00:02:36.929 Test: blob_is_degraded ...passed 00:02:36.929 Suite: blob_esnap_bs_copy_extent 00:02:36.929 Test: blob_esnap_create ...passed 00:02:36.929 Test: blob_esnap_thread_add_remove ...passed 00:02:36.929 Test: blob_esnap_clone_snapshot ...passed 00:02:36.929 Test: blob_esnap_clone_inflate ...passed 00:02:36.929 Test: blob_esnap_clone_decouple ...passed 00:02:36.929 Test: blob_esnap_clone_reload ...passed 00:02:36.929 Test: blob_esnap_hotplug ...passed 00:02:36.929 00:02:36.929 Run Summary: Type Total Ran Passed Failed Inactive 00:02:36.929 suites 16 16 n/a 0 0 00:02:36.929 tests 348 348 348 0 0 00:02:36.929 asserts 92605 92605 92605 0 n/a 00:02:36.929 00:02:36.929 Elapsed time = 9.141 seconds 00:02:36.929 13:26:16 -- unit/unittest.sh@41 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:02:36.929 00:02:36.929 00:02:36.929 CUnit - A unit testing framework for C - Version 2.1-3 00:02:36.930 http://cunit.sourceforge.net/ 00:02:36.930 00:02:36.930 00:02:36.930 Suite: blob_bdev 00:02:36.930 Test: create_bs_dev ...passed 00:02:36.930 Test: create_bs_dev_ro ...[2024-07-10 13:26:16.265277] /usr/home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:02:36.930 passed 00:02:36.930 Test: create_bs_dev_rw ...passed 00:02:36.930 Test: claim_bs_dev ...passed 00:02:36.930 Test: claim_bs_dev_ro ...passed 00:02:36.930 Test: deferred_destroy_refs ...passed 00:02:36.930 Test: deferred_destroy_channels ...passed 00:02:36.930 Test: deferred_destroy_threads ...passed 00:02:36.930 00:02:36.930 [2024-07-10 13:26:16.265729] 
/usr/home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:02:36.930 Run Summary: Type Total Ran Passed Failed Inactive 00:02:36.930 suites 1 1 n/a 0 0 00:02:36.930 tests 8 8 8 0 0 00:02:36.930 asserts 119 119 119 0 n/a 00:02:36.930 00:02:36.930 Elapsed time = 0.000 seconds 00:02:36.930 13:26:16 -- unit/unittest.sh@42 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:02:36.930 00:02:36.930 00:02:36.930 CUnit - A unit testing framework for C - Version 2.1-3 00:02:36.930 http://cunit.sourceforge.net/ 00:02:36.930 00:02:36.930 00:02:36.930 Suite: tree 00:02:36.930 Test: blobfs_tree_op_test ...passed 00:02:36.930 00:02:36.930 Run Summary: Type Total Ran Passed Failed Inactive 00:02:36.930 suites 1 1 n/a 0 0 00:02:36.930 tests 1 1 1 0 0 00:02:36.930 asserts 27 27 27 0 n/a 00:02:36.930 00:02:36.930 Elapsed time = 0.000 seconds 00:02:36.930 13:26:16 -- unit/unittest.sh@43 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:02:36.930 00:02:36.930 00:02:36.930 CUnit - A unit testing framework for C - Version 2.1-3 00:02:36.930 http://cunit.sourceforge.net/ 00:02:36.930 00:02:36.930 00:02:36.930 Suite: blobfs_async_ut 00:02:37.188 Test: fs_init ...passed 00:02:37.188 Test: fs_open ...passed 00:02:37.188 Test: fs_create ...passed 00:02:37.188 Test: fs_truncate ...passed 00:02:37.188 Test: fs_rename ...[2024-07-10 13:26:16.371571] /usr/home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:02:37.188 passed 00:02:37.188 Test: fs_rw_async ...passed 00:02:37.188 Test: fs_writev_readv_async ...passed 00:02:37.188 Test: tree_find_buffer_ut ...passed 00:02:37.188 Test: channel_ops ...passed 00:02:37.188 Test: channel_ops_sync ...passed 00:02:37.188 00:02:37.188 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.188 suites 1 1 n/a 0 0 00:02:37.188 tests 10 10 10 0 0 00:02:37.188 asserts 292 292 292 0 n/a 00:02:37.188 00:02:37.188 Elapsed time = 0.133 seconds 00:02:37.188 13:26:16 -- unit/unittest.sh@45 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:02:37.188 00:02:37.188 00:02:37.188 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.188 http://cunit.sourceforge.net/ 00:02:37.188 00:02:37.188 00:02:37.188 Suite: blobfs_sync_ut 00:02:37.188 Test: cache_read_after_write ...passed 00:02:37.188 Test: file_length ...[2024-07-10 13:26:16.467220] /usr/home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:02:37.188 passed 00:02:37.188 Test: append_write_to_extend_blob ...passed 00:02:37.188 Test: partial_buffer ...passed 00:02:37.188 Test: cache_write_null_buffer ...passed 00:02:37.188 Test: fs_create_sync ...passed 00:02:37.188 Test: fs_rename_sync ...passed 00:02:37.188 Test: cache_append_no_cache ...passed 00:02:37.188 Test: fs_delete_file_without_close ...passed 00:02:37.188 00:02:37.188 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.189 suites 1 1 n/a 0 0 00:02:37.189 tests 9 9 9 0 0 00:02:37.189 asserts 345 345 345 0 n/a 00:02:37.189 00:02:37.189 Elapsed time = 0.250 seconds 00:02:37.448 13:26:16 -- unit/unittest.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:02:37.448 00:02:37.448 00:02:37.448 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.448 http://cunit.sourceforge.net/ 
00:02:37.448 00:02:37.448 00:02:37.448 Suite: blobfs_bdev_ut 00:02:37.448 Test: spdk_blobfs_bdev_detect_test ...[2024-07-10 13:26:16.557704] /usr/home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:02:37.448 passed 00:02:37.448 Test: spdk_blobfs_bdev_create_test ...passed 00:02:37.448 Test: spdk_blobfs_bdev_mount_test ...passed 00:02:37.448 00:02:37.449 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.449 suites 1 1 n/a 0 0 00:02:37.449 tests 3 3 3 0 0 00:02:37.449 asserts 9 9 9 0 n/a 00:02:37.449 00:02:37.449 Elapsed time = 0.000 seconds 00:02:37.449 [2024-07-10 13:26:16.557992] /usr/home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:02:37.449 00:02:37.449 real 0m9.453s 00:02:37.449 user 0m9.404s 00:02:37.449 sys 0m0.172s 00:02:37.449 13:26:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:37.449 13:26:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.449 ************************************ 00:02:37.449 END TEST unittest_blob_blobfs 00:02:37.449 ************************************ 00:02:37.449 13:26:16 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:02:37.449 13:26:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:37.449 13:26:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:37.449 13:26:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.449 ************************************ 00:02:37.449 START TEST unittest_event 00:02:37.449 ************************************ 00:02:37.449 13:26:16 -- common/autotest_common.sh@1104 -- # unittest_event 00:02:37.449 13:26:16 -- unit/unittest.sh@50 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:02:37.449 00:02:37.449 00:02:37.449 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.449 http://cunit.sourceforge.net/ 00:02:37.449 00:02:37.449 00:02:37.449 Suite: app_suite 00:02:37.449 Test: test_spdk_app_parse_args ...app_ut [options] 00:02:37.449 options: 00:02:37.449 -c, --config JSON config file (default none) 00:02:37.449 --json JSON config file (default none) 00:02:37.449 --json-ignore-init-errors 00:02:37.449 don't exit on invalid config entry 00:02:37.449 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:02:37.449 -g, --single-file-segments 00:02:37.449 force creating just one hugetlbfs file 00:02:37.449 -h, --help show this usage 00:02:37.449 -i, --shm-id shared memory ID (optional) 00:02:37.449 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:02:37.449 --lcores lcore to CPU mapping list. The list is in the format: 00:02:37.449 [<,lcores[@CPUs]>...] 00:02:37.449 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:02:37.449 app_ut: invalid option -- z 00:02:37.449 Within the group, '-' is used for range separator, 00:02:37.449 ',' is used for single number separator. 
00:02:37.449 '( )' can be omitted for single element group, 00:02:37.449 '@' can be omitted if cpus and lcores have the same value 00:02:37.449 -n, --mem-channels channel number of memory channels used for DPDK 00:02:37.449 -p, --main-core main (primary) core for DPDK 00:02:37.449 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:02:37.449 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:02:37.449 --disable-cpumask-locks Disable CPU core lock files. 00:02:37.449 --silence-noticelog disable notice level logging to stderr 00:02:37.449 --msg-mempool-size global message memory pool size in count (default: 262143) 00:02:37.449 -u, --no-pci disable PCI access 00:02:37.449 --wait-for-rpc wait for RPCs to initialize subsystems 00:02:37.449 --max-delay maximum reactor delay (in microseconds) 00:02:37.449 -B, --pci-blocked pci addr to block (can be used more than once) 00:02:37.449 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:02:37.449 -R, --huge-unlink unlink huge files after initialization 00:02:37.449 -v, --version print SPDK version 00:02:37.449 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:02:37.449 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:02:37.449 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:02:37.449 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:02:37.449 Tracepoints vary in size and can use more than one trace entry. 00:02:37.449 --rpcs-allowed comma-separated list of permitted RPCS 00:02:37.449 --env-context Opaque context for use of the env implementation 00:02:37.449 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:02:37.449 --no-huge run without using hugepages 00:02:37.449 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:02:37.449 -e, --tpoint-group [:] 00:02:37.449 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:02:37.449 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:02:37.449 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:02:37.449 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:02:37.449 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:02:37.449 app_ut: unrecognized option `--test-long-opt' 00:02:37.449 app_ut [options] 00:02:37.449 options: 00:02:37.449 -c, --config JSON config file (default none) 00:02:37.449 --json JSON config file (default none) 00:02:37.449 --json-ignore-init-errors 00:02:37.449 don't exit on invalid config entry 00:02:37.449 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:02:37.449 -g, --single-file-segments 00:02:37.449 force creating just one hugetlbfs file 00:02:37.449 -h, --help show this usage 00:02:37.449 -i, --shm-id shared memory ID (optional) 00:02:37.449 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:02:37.449 --lcores lcore to CPU mapping list. The list is in the format: 00:02:37.449 [<,lcores[@CPUs]>...] 
00:02:37.449 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:02:37.449 Within the group, '-' is used for range separator, 00:02:37.449 ',' is used for single number separator. 00:02:37.449 '( )' can be omitted for single element group, 00:02:37.449 '@' can be omitted if cpus and lcores have the same value 00:02:37.449 -n, --mem-channels channel number of memory channels used for DPDK 00:02:37.449 -p, --main-core main (primary) core for DPDK 00:02:37.449 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:02:37.449 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:02:37.449 --disable-cpumask-locks Disable CPU core lock files. 00:02:37.449 --silence-noticelog disable notice level logging to stderr 00:02:37.449 --msg-mempool-size global message memory pool size in count (default: 262143) 00:02:37.449 -u, --no-pci disable PCI access 00:02:37.449 --wait-for-rpc wait for RPCs to initialize subsystems 00:02:37.449 --max-delay maximum reactor delay (in microseconds) 00:02:37.449 -B, --pci-blocked pci addr to block (can be used more than once) 00:02:37.449 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:02:37.449 -R, --huge-unlink unlink huge files after initialization 00:02:37.449 -v, --version print SPDK version 00:02:37.449 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:02:37.449 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:02:37.449 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:02:37.449 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:02:37.449 Tracepoints vary in size and can use more than one trace entry. 00:02:37.449 --rpcs-allowed comma-separated list of permitted RPCS 00:02:37.449 --env-context Opaque context for use of the env implementation 00:02:37.449 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:02:37.449 --no-huge run without using hugepages 00:02:37.449 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:02:37.449 -e, --tpoint-group [:] 00:02:37.449 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:02:37.449 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:02:37.449 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:02:37.449 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:02:37.449 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:02:37.449 [2024-07-10 13:26:16.603241] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1031:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
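
The "Duplicated option" error above, the repeated usage dumps around it, and the conflict checks that follow come from app_ut feeding spdk_app_parse_args() deliberately bad argument sets: an unknown short option, an unrecognized long option, an app-specific option letter that collides with a generic SPDK one, and an invalid main-core value. A hedged sketch of the rule being exercised is below; the option letter, callbacks, and the single-argument spdk_app_opts_init() form are illustrative assumptions, not taken from this test.

    #include "spdk/stdinc.h"
    #include "spdk/event.h"

    static int
    my_parse(int ch, char *arg)
    {
            return ch == 'x' ? 0 : -EINVAL;   /* 'x' is app-private */
    }

    static void
    my_usage(void)
    {
            printf(" -x    enable the app-private mode\n");
    }

    int
    main(int argc, char **argv)
    {
            struct spdk_app_opts opts;

            spdk_app_opts_init(&opts);   /* single-argument form assumed for this vintage */

            /* Passing "x" is fine; passing "c" would collide with SPDK's own
             * -c <config> option and trigger the "Duplicated option" failure above. */
            if (spdk_app_parse_args(argc, argv, &opts, "x", NULL,
                                    my_parse, my_usage) != SPDK_APP_PARSE_ARGS_SUCCESS) {
                    return 1;
            }
            return 0;
    }
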
00:02:37.449 app_ut [options] 00:02:37.449 options: 00:02:37.449 -c, --config JSON config file (default none) 00:02:37.449 --json JSON config file (default none) 00:02:37.449 --json-ignore-init-errors 00:02:37.449 don't exit on invalid config entry 00:02:37.449 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:02:37.449 -g, --single-file-segments 00:02:37.449 force creating just one hugetlbfs file 00:02:37.449 -h, --help show this usage 00:02:37.449 -i, --shm-id shared memory ID (optional) 00:02:37.449 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:02:37.449 --lcores lcore to CPU mapping list. The list is in the format: 00:02:37.449 [<,lcores[@CPUs]>...] 00:02:37.449 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:02:37.449 Within the group, '-' is used for range separator, 00:02:37.449 ',' is used for single number separator. 00:02:37.449 '( )' can be omitted for single element group, 00:02:37.449 '@' can be omitted if cpus and lcores have the same value 00:02:37.449 -n, --mem-channels channel number of memory channels used for DPDK 00:02:37.449 -p, --main-core main (primary) core for DPDK 00:02:37.449 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:02:37.449 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:02:37.449 --disable-cpumask-locks Disable CPU core lock files. 00:02:37.449 --silence-noticelog disable notice level logging to stderr 00:02:37.449 --msg-mempool-size global message memory pool size in count (default: 262143) 00:02:37.449 -u, --no-pci disable PCI access 00:02:37.449 --wait-for-rpc wait for RPCs to initialize subsystems 00:02:37.450 --max-delay maximum reactor delay (in microseconds) 00:02:37.450 -B, --pci-blocked pci addr to block (can be used more than once) 00:02:37.450 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:02:37.450 -R, --huge-unlink unlink huge files after initialization 00:02:37.450 -v, --version print SPDK version 00:02:37.450 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:02:37.450 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:02:37.450 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:02:37.450 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:02:37.450 Tracepoints vary in size and can use more than one trace entry. 00:02:37.450 --rpcs-allowed comma-separated list of permitted RPCS 00:02:37.450 --env-context Opaque context for use of the env implementation 00:02:37.450 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:02:37.450 --no-huge run without using hugepages 00:02:37.450 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:02:37.450 -e, --tpoint-group [:] 00:02:37.450 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:02:37.450 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:02:37.450 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:02:37.450 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:02:37.450 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:02:37.450 [2024-07-10 13:26:16.603437] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:02:37.450 [2024-07-10 13:26:16.603509] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:02:37.450 passed 00:02:37.450 00:02:37.450 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.450 suites 1 1 n/a 0 0 00:02:37.450 tests 1 1 1 0 0 00:02:37.450 asserts 8 8 8 0 n/a 00:02:37.450 00:02:37.450 Elapsed time = 0.000 seconds 00:02:37.450 13:26:16 -- unit/unittest.sh@51 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:02:37.450 00:02:37.450 00:02:37.450 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.450 http://cunit.sourceforge.net/ 00:02:37.450 00:02:37.450 00:02:37.450 Suite: app_suite 00:02:37.450 Test: test_create_reactor ...passed 00:02:37.450 Test: test_init_reactors ...passed 00:02:37.450 Test: test_event_call ...passed 00:02:37.450 Test: test_schedule_thread ...passed 00:02:37.450 Test: test_reschedule_thread ...passed 00:02:37.450 Test: test_bind_thread ...passed 00:02:37.450 Test: test_for_each_reactor ...passed 00:02:37.450 Test: test_reactor_stats ...passed 00:02:37.450 Test: test_scheduler ...passed 00:02:37.450 Test: test_governor ...passed 00:02:37.450 00:02:37.450 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.450 suites 1 1 n/a 0 0 00:02:37.450 tests 10 10 10 0 0 00:02:37.450 asserts 336 336 336 0 n/a 00:02:37.450 00:02:37.450 Elapsed time = 0.008 seconds 00:02:37.450 00:02:37.450 real 0m0.020s 00:02:37.450 user 0m0.018s 00:02:37.450 sys 0m0.015s 00:02:37.450 13:26:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:37.450 13:26:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.450 ************************************ 00:02:37.450 END TEST unittest_event 00:02:37.450 ************************************ 00:02:37.450 13:26:16 -- unit/unittest.sh@233 -- # uname -s 00:02:37.450 13:26:16 -- unit/unittest.sh@233 -- # '[' FreeBSD = Linux ']' 00:02:37.450 13:26:16 -- unit/unittest.sh@237 -- # run_test unittest_accel /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:02:37.450 13:26:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:37.450 13:26:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:37.450 13:26:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.450 ************************************ 00:02:37.450 START TEST unittest_accel 00:02:37.450 ************************************ 00:02:37.450 13:26:16 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:02:37.450 00:02:37.450 00:02:37.450 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.450 http://cunit.sourceforge.net/ 00:02:37.450 00:02:37.450 00:02:37.450 Suite: accel_sequence 00:02:37.450 Test: test_sequence_fill_copy ...passed 00:02:37.450 Test: test_sequence_abort ...passed 00:02:37.450 Test: test_sequence_append_error ...passed 00:02:37.450 Test: test_sequence_completion_error ...[2024-07-10 13:26:16.680060] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1927:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 
0x82b345640 00:02:37.450 [2024-07-10 13:26:16.680439] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1927:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x82b345640 00:02:37.450 [2024-07-10 13:26:16.680476] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1837:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x82b345640 00:02:37.450 [2024-07-10 13:26:16.680496] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1837:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x82b345640 00:02:37.450 passed 00:02:37.450 Test: test_sequence_decompress ...passed 00:02:37.450 Test: test_sequence_reverse ...passed 00:02:37.450 Test: test_sequence_copy_elision ...passed 00:02:37.450 Test: test_sequence_accel_buffers ...passed 00:02:37.450 Test: test_sequence_memory_domain ...[2024-07-10 13:26:16.682217] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1729:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:02:37.450 [2024-07-10 13:26:16.682278] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1768:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -48 00:02:37.450 passed 00:02:37.450 Test: test_sequence_module_memory_domain ...passed 00:02:37.450 Test: test_sequence_crypto ...passed 00:02:37.450 Test: test_sequence_driver ...[2024-07-10 13:26:16.683147] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1876:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x82b345c40 using driver: ut 00:02:37.450 [2024-07-10 13:26:16.683204] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1941:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x82b345c40 through driver: ut 00:02:37.450 passed 00:02:37.450 Test: test_sequence_same_iovs ...passed 00:02:37.450 Test: test_sequence_crc32 ...passed 00:02:37.450 Suite: accel 00:02:37.450 Test: test_spdk_accel_task_complete ...passed 00:02:37.450 Test: test_get_task ...passed 00:02:37.450 Test: test_spdk_accel_submit_copy ...passed 00:02:37.450 Test: test_spdk_accel_submit_dualcast ...[2024-07-10 13:26:16.683951] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:02:37.450 [2024-07-10 13:26:16.683981] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:02:37.450 passed 00:02:37.450 Test: test_spdk_accel_submit_compare ...passed 00:02:37.450 Test: test_spdk_accel_submit_fill ...passed 00:02:37.450 Test: test_spdk_accel_submit_crc32c ...passed 00:02:37.450 Test: test_spdk_accel_submit_crc32cv ...passed 00:02:37.450 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:02:37.450 Test: test_spdk_accel_submit_xor ...passed 00:02:37.450 Test: test_spdk_accel_module_find_by_name ...passed 00:02:37.450 Test: test_spdk_accel_module_register ...passed 00:02:37.450 00:02:37.450 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.450 suites 2 2 n/a 0 0 00:02:37.450 tests 26 26 26 0 0 00:02:37.450 asserts 831 831 831 0 n/a 00:02:37.450 00:02:37.450 Elapsed time = 0.008 seconds 00:02:37.450 00:02:37.450 real 0m0.017s 00:02:37.450 user 0m0.016s 00:02:37.450 sys 0m0.000s 00:02:37.450 13:26:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:37.450 13:26:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.450 ************************************ 00:02:37.450 END TEST 
unittest_accel 00:02:37.450 ************************************ 00:02:37.450 13:26:16 -- unit/unittest.sh@238 -- # run_test unittest_ioat /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:02:37.450 13:26:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:37.450 13:26:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:37.450 13:26:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.450 ************************************ 00:02:37.450 START TEST unittest_ioat 00:02:37.450 ************************************ 00:02:37.450 13:26:16 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:02:37.450 00:02:37.450 00:02:37.450 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.450 http://cunit.sourceforge.net/ 00:02:37.450 00:02:37.450 00:02:37.450 Suite: ioat 00:02:37.450 Test: ioat_state_check ...passed 00:02:37.450 00:02:37.450 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.450 suites 1 1 n/a 0 0 00:02:37.450 tests 1 1 1 0 0 00:02:37.450 asserts 32 32 32 0 n/a 00:02:37.450 00:02:37.450 Elapsed time = 0.000 seconds 00:02:37.450 00:02:37.450 real 0m0.008s 00:02:37.450 user 0m0.008s 00:02:37.450 sys 0m0.000s 00:02:37.450 13:26:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:37.450 13:26:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.450 ************************************ 00:02:37.450 END TEST unittest_ioat 00:02:37.450 ************************************ 00:02:37.450 13:26:16 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:02:37.450 13:26:16 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:02:37.450 13:26:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:37.450 13:26:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:37.450 13:26:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.450 ************************************ 00:02:37.450 START TEST unittest_idxd_user 00:02:37.450 ************************************ 00:02:37.450 13:26:16 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:02:37.450 00:02:37.450 00:02:37.450 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.450 http://cunit.sourceforge.net/ 00:02:37.450 00:02:37.450 00:02:37.450 Suite: idxd_user 00:02:37.451 Test: test_idxd_wait_cmd ...[2024-07-10 13:26:16.781321] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:02:37.451 passed 00:02:37.451 Test: test_idxd_reset_dev ...[2024-07-10 13:26:16.781524] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:02:37.451 [2024-07-10 13:26:16.781544] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:02:37.451 passed 00:02:37.451 Test: test_idxd_group_config ...passed 00:02:37.451 Test: test_idxd_wq_config ...passed 00:02:37.451 00:02:37.451 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.451 suites 1 1 n/a 0 0 00:02:37.451 tests 4 4 4 0 0 00:02:37.451 asserts 20 20 20 0 n/a 00:02:37.451 00:02:37.451 Elapsed time = 0.000 seconds 00:02:37.451 [2024-07-10 13:26:16.781555] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error 
resetting device 4294967274 00:02:37.451 00:02:37.451 real 0m0.005s 00:02:37.451 user 0m0.000s 00:02:37.451 sys 0m0.011s 00:02:37.451 13:26:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:37.451 13:26:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.451 ************************************ 00:02:37.451 END TEST unittest_idxd_user 00:02:37.451 ************************************ 00:02:37.710 13:26:16 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:02:37.710 13:26:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:37.710 13:26:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:37.710 13:26:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.710 ************************************ 00:02:37.710 START TEST unittest_iscsi 00:02:37.710 ************************************ 00:02:37.710 13:26:16 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:02:37.710 13:26:16 -- unit/unittest.sh@66 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:02:37.710 00:02:37.710 00:02:37.710 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.710 http://cunit.sourceforge.net/ 00:02:37.710 00:02:37.710 00:02:37.710 Suite: conn_suite 00:02:37.710 Test: read_task_split_in_order_case ...passed 00:02:37.710 Test: read_task_split_reverse_order_case ...passed 00:02:37.710 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:02:37.710 Test: process_non_read_task_completion_test ...passed 00:02:37.710 Test: free_tasks_on_connection ...passed 00:02:37.710 Test: free_tasks_with_queued_datain ...passed 00:02:37.710 Test: abort_queued_datain_task_test ...passed 00:02:37.710 Test: abort_queued_datain_tasks_test ...passed 00:02:37.710 00:02:37.710 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.710 suites 1 1 n/a 0 0 00:02:37.710 tests 8 8 8 0 0 00:02:37.710 asserts 230 230 230 0 n/a 00:02:37.710 00:02:37.710 Elapsed time = 0.000 seconds 00:02:37.710 13:26:16 -- unit/unittest.sh@67 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:02:37.710 00:02:37.710 00:02:37.710 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.710 http://cunit.sourceforge.net/ 00:02:37.710 00:02:37.710 00:02:37.710 Suite: iscsi_suite 00:02:37.710 Test: param_negotiation_test ...passed 00:02:37.710 Test: list_negotiation_test ...passed 00:02:37.710 Test: parse_valid_test ...passed 00:02:37.710 Test: parse_invalid_test ...[2024-07-10 13:26:16.836467] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:02:37.710 [2024-07-10 13:26:16.836670] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:02:37.710 [2024-07-10 13:26:16.836689] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:02:37.710 [2024-07-10 13:26:16.836715] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:02:37.710 [2024-07-10 13:26:16.836729] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:02:37.710 [2024-07-10 13:26:16.836740] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:02:37.710 passed 00:02:37.710 00:02:37.710 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.710 suites 1 1 n/a 0 0 00:02:37.710 tests 4 4 4 0 0 00:02:37.710 asserts 161 161 161 0 n/a 00:02:37.710 00:02:37.710 
Elapsed time = 0.000 seconds 00:02:37.710 [2024-07-10 13:26:16.836750] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:02:37.710 13:26:16 -- unit/unittest.sh@68 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:02:37.710 00:02:37.710 00:02:37.710 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.710 http://cunit.sourceforge.net/ 00:02:37.710 00:02:37.710 00:02:37.710 Suite: iscsi_target_node_suite 00:02:37.711 Test: add_lun_test_cases ...[2024-07-10 13:26:16.845134] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1249:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:02:37.711 [2024-07-10 13:26:16.845547] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:02:37.711 [2024-07-10 13:26:16.845591] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:02:37.711 [2024-07-10 13:26:16.845614] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:02:37.711 passed 00:02:37.711 Test: allow_any_allowed ...[2024-07-10 13:26:16.845633] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:02:37.711 passed 00:02:37.711 Test: allow_ipv6_allowed ...passed 00:02:37.711 Test: allow_ipv6_denied ...passed 00:02:37.711 Test: allow_ipv6_invalid ...passed 00:02:37.711 Test: allow_ipv4_allowed ...passed 00:02:37.711 Test: allow_ipv4_denied ...passed 00:02:37.711 Test: allow_ipv4_invalid ...passed 00:02:37.711 Test: node_access_allowed ...passed 00:02:37.711 Test: node_access_denied_by_empty_netmask ...passed 00:02:37.711 Test: node_access_multi_initiator_groups_cases ...passed 00:02:37.711 Test: allow_iscsi_name_multi_maps_case ...passed 00:02:37.711 Test: chap_param_test_cases ...[2024-07-10 13:26:16.845887] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1036:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:02:37.711 [2024-07-10 13:26:16.845928] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1036:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:02:37.711 [2024-07-10 13:26:16.845947] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1036:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:02:37.711 [2024-07-10 13:26:16.845967] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1036:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:02:37.711 passed 00:02:37.711 00:02:37.711 [2024-07-10 13:26:16.845986] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:02:37.711 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.711 suites 1 1 n/a 0 0 00:02:37.711 tests 13 13 13 0 0 00:02:37.711 asserts 50 50 50 0 n/a 00:02:37.711 00:02:37.711 Elapsed time = 0.008 seconds 00:02:37.711 13:26:16 -- unit/unittest.sh@69 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:02:37.711 00:02:37.711 00:02:37.711 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.711 http://cunit.sourceforge.net/ 00:02:37.711 00:02:37.711 00:02:37.711 Suite: iscsi_suite 00:02:37.711 Test: op_login_check_target_test ...[2024-07-10 13:26:16.852218] 
/usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:02:37.711 passed 00:02:37.711 Test: op_login_session_normal_test ...[2024-07-10 13:26:16.852429] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:02:37.711 [2024-07-10 13:26:16.852444] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:02:37.711 [2024-07-10 13:26:16.852456] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:02:37.711 [2024-07-10 13:26:16.852491] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:02:37.711 [2024-07-10 13:26:16.852504] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1470:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:02:37.711 [2024-07-10 13:26:16.852525] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 703:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:02:37.711 [2024-07-10 13:26:16.852537] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1470:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:02:37.711 passed 00:02:37.711 Test: maxburstlength_test ...[2024-07-10 13:26:16.852582] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:02:37.711 passed 00:02:37.711 Test: underflow_for_read_transfer_test ...[2024-07-10 13:26:16.852601] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4551:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:02:37.711 passed 00:02:37.711 Test: underflow_for_zero_read_transfer_test ...passed 00:02:37.711 Test: underflow_for_request_sense_test ...passed 00:02:37.711 Test: underflow_for_check_condition_test ...passed 00:02:37.711 Test: add_transfer_task_test ...passed 00:02:37.711 Test: get_transfer_task_test ...passed 00:02:37.711 Test: del_transfer_task_test ...passed 00:02:37.711 Test: clear_all_transfer_tasks_test ...passed 00:02:37.711 Test: build_iovs_test ...passed 00:02:37.711 Test: build_iovs_with_md_test ...passed 00:02:37.711 Test: pdu_hdr_op_login_test ...[2024-07-10 13:26:16.852750] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:02:37.711 [2024-07-10 13:26:16.852772] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1259:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:02:37.711 [2024-07-10 13:26:16.852784] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:02:37.711 passed 00:02:37.711 Test: pdu_hdr_op_text_test ...[2024-07-10 13:26:16.852798] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2241:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:02:37.711 [2024-07-10 13:26:16.852809] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:02:37.711 passed 00:02:37.711 Test: pdu_hdr_op_logout_test ...[2024-07-10 13:26:16.852820] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2286:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 
00:02:37.711 passed 00:02:37.711 Test: pdu_hdr_op_scsi_test ...[2024-07-10 13:26:16.852833] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2517:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:02:37.711 [2024-07-10 13:26:16.852847] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:02:37.711 [2024-07-10 13:26:16.852858] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:02:37.711 [2024-07-10 13:26:16.852868] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:02:37.711 [2024-07-10 13:26:16.852880] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3398:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:02:37.711 [2024-07-10 13:26:16.852891] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3405:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:02:37.711 passed 00:02:37.711 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-10 13:26:16.852903] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:02:37.711 [2024-07-10 13:26:16.852916] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:02:37.711 passed 00:02:37.711 Test: pdu_hdr_op_nopout_test ...[2024-07-10 13:26:16.852926] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:02:37.711 [2024-07-10 13:26:16.852941] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:02:37.711 [2024-07-10 13:26:16.852952] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:02:37.711 [2024-07-10 13:26:16.852961] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:02:37.711 [2024-07-10 13:26:16.852981] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:02:37.711 passed 00:02:37.711 Test: pdu_hdr_op_data_test ...[2024-07-10 13:26:16.852994] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:02:37.711 [2024-07-10 13:26:16.853008] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:02:37.711 [2024-07-10 13:26:16.853019] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:02:37.711 [2024-07-10 13:26:16.853030] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:02:37.711 [2024-07-10 13:26:16.853041] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:02:37.711 [2024-07-10 13:26:16.853052] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: 
offset(4096) error 00:02:37.711 [2024-07-10 13:26:16.853062] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4245:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:02:37.711 passed 00:02:37.711 Test: empty_text_with_cbit_test ...passed 00:02:37.711 Test: pdu_payload_read_test ...[2024-07-10 13:26:16.853451] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4632:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:02:37.711 passed 00:02:37.711 Test: data_out_pdu_sequence_test ...passed 00:02:37.711 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:02:37.711 00:02:37.711 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.711 suites 1 1 n/a 0 0 00:02:37.711 tests 24 24 24 0 0 00:02:37.711 asserts 150253 150253 150253 0 n/a 00:02:37.711 00:02:37.711 Elapsed time = 0.000 seconds 00:02:37.711 13:26:16 -- unit/unittest.sh@70 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:02:37.711 00:02:37.711 00:02:37.711 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.711 http://cunit.sourceforge.net/ 00:02:37.711 00:02:37.711 00:02:37.711 Suite: init_grp_suite 00:02:37.711 Test: create_initiator_group_success_case ...passed 00:02:37.711 Test: find_initiator_group_success_case ...passed 00:02:37.711 Test: register_initiator_group_twice_case ...passed 00:02:37.711 Test: add_initiator_name_success_case ...passed 00:02:37.711 Test: add_initiator_name_fail_case ...[2024-07-10 13:26:16.862613] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:02:37.711 passed 00:02:37.711 Test: delete_all_initiator_names_success_case ...passed 00:02:37.711 Test: add_netmask_success_case ...passed 00:02:37.711 Test: add_netmask_fail_case ...passed 00:02:37.711 Test: delete_all_netmasks_success_case ...passed 00:02:37.711 Test: initiator_name_overwrite_all_to_any_case ...[2024-07-10 13:26:16.863036] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:02:37.711 passed 00:02:37.711 Test: netmask_overwrite_all_to_any_case ...passed 00:02:37.711 Test: add_delete_initiator_names_case ...passed 00:02:37.711 Test: add_duplicated_initiator_names_case ...passed 00:02:37.711 Test: delete_nonexisting_initiator_names_case ...passed 00:02:37.711 Test: add_delete_netmasks_case ...passed 00:02:37.711 Test: add_duplicated_netmasks_case ...passed 00:02:37.711 Test: delete_nonexisting_netmasks_case ...passed 00:02:37.711 00:02:37.711 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.711 suites 1 1 n/a 0 0 00:02:37.711 tests 17 17 17 0 0 00:02:37.711 asserts 108 108 108 0 n/a 00:02:37.711 00:02:37.711 Elapsed time = 0.000 seconds 00:02:37.712 13:26:16 -- unit/unittest.sh@71 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:02:37.712 00:02:37.712 00:02:37.712 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.712 http://cunit.sourceforge.net/ 00:02:37.712 00:02:37.712 00:02:37.712 Suite: portal_grp_suite 00:02:37.712 Test: portal_create_ipv4_normal_case ...passed 00:02:37.712 Test: portal_create_ipv6_normal_case ...passed 00:02:37.712 Test: portal_create_ipv4_wildcard_case ...passed 00:02:37.712 Test: portal_create_ipv6_wildcard_case ...passed 00:02:37.712 Test: portal_create_twice_case ...[2024-07-10 13:26:16.870428] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: 
portal (192.168.2.0, 3260) already exists 00:02:37.712 passed 00:02:37.712 Test: portal_grp_register_unregister_case ...passed 00:02:37.712 Test: portal_grp_register_twice_case ...passed 00:02:37.712 Test: portal_grp_add_delete_case ...passed 00:02:37.712 Test: portal_grp_add_delete_twice_case ...passed 00:02:37.712 00:02:37.712 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.712 suites 1 1 n/a 0 0 00:02:37.712 tests 9 9 9 0 0 00:02:37.712 asserts 44 44 44 0 n/a 00:02:37.712 00:02:37.712 Elapsed time = 0.000 seconds 00:02:37.712 00:02:37.712 real 0m0.048s 00:02:37.712 user 0m0.017s 00:02:37.712 sys 0m0.033s 00:02:37.712 13:26:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:37.712 13:26:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.712 ************************************ 00:02:37.712 END TEST unittest_iscsi 00:02:37.712 ************************************ 00:02:37.712 13:26:16 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:02:37.712 13:26:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:37.712 13:26:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:37.712 13:26:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.712 ************************************ 00:02:37.712 START TEST unittest_json 00:02:37.712 ************************************ 00:02:37.712 13:26:16 -- common/autotest_common.sh@1104 -- # unittest_json 00:02:37.712 13:26:16 -- unit/unittest.sh@75 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:02:37.712 00:02:37.712 00:02:37.712 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.712 http://cunit.sourceforge.net/ 00:02:37.712 00:02:37.712 00:02:37.712 Suite: json 00:02:37.712 Test: test_parse_literal ...passed 00:02:37.712 Test: test_parse_string_simple ...passed 00:02:37.712 Test: test_parse_string_control_chars ...passed 00:02:37.712 Test: test_parse_string_utf8 ...passed 00:02:37.712 Test: test_parse_string_escapes_twochar ...passed 00:02:37.712 Test: test_parse_string_escapes_unicode ...passed 00:02:37.712 Test: test_parse_number ...passed 00:02:37.712 Test: test_parse_array ...passed 00:02:37.712 Test: test_parse_object ...passed 00:02:37.712 Test: test_parse_nesting ...passed 00:02:37.712 Test: test_parse_comment ...passed 00:02:37.712 00:02:37.712 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.712 suites 1 1 n/a 0 0 00:02:37.712 tests 11 11 11 0 0 00:02:37.712 asserts 1516 1516 1516 0 n/a 00:02:37.712 00:02:37.712 Elapsed time = 0.000 seconds 00:02:37.712 13:26:16 -- unit/unittest.sh@76 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:02:37.712 00:02:37.712 00:02:37.712 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.712 http://cunit.sourceforge.net/ 00:02:37.712 00:02:37.712 00:02:37.712 Suite: json 00:02:37.712 Test: test_strequal ...passed 00:02:37.712 Test: test_num_to_uint16 ...passed 00:02:37.712 Test: test_num_to_int32 ...passed 00:02:37.712 Test: test_num_to_uint64 ...passed 00:02:37.712 Test: test_decode_object ...passed 00:02:37.712 Test: test_decode_array ...passed 00:02:37.712 Test: test_decode_bool ...passed 00:02:37.712 Test: test_decode_uint16 ...passed 00:02:37.712 Test: test_decode_int32 ...passed 00:02:37.712 Test: test_decode_uint32 ...passed 00:02:37.712 Test: test_decode_uint64 ...passed 00:02:37.712 Test: test_decode_string ...passed 00:02:37.712 Test: test_decode_uuid ...passed 00:02:37.712 Test: test_find ...passed 00:02:37.712 Test: 
test_find_array ...passed 00:02:37.712 Test: test_iterating ...passed 00:02:37.712 Test: test_free_object ...passed 00:02:37.712 00:02:37.712 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.712 suites 1 1 n/a 0 0 00:02:37.712 tests 17 17 17 0 0 00:02:37.712 asserts 236 236 236 0 n/a 00:02:37.712 00:02:37.712 Elapsed time = 0.000 seconds 00:02:37.712 13:26:16 -- unit/unittest.sh@77 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:02:37.712 00:02:37.712 00:02:37.712 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.712 http://cunit.sourceforge.net/ 00:02:37.712 00:02:37.712 00:02:37.712 Suite: json 00:02:37.712 Test: test_write_literal ...passed 00:02:37.712 Test: test_write_string_simple ...passed 00:02:37.712 Test: test_write_string_escapes ...passed 00:02:37.712 Test: test_write_string_utf16le ...passed 00:02:37.712 Test: test_write_number_int32 ...passed 00:02:37.712 Test: test_write_number_uint32 ...passed 00:02:37.712 Test: test_write_number_uint128 ...passed 00:02:37.712 Test: test_write_string_number_uint128 ...passed 00:02:37.712 Test: test_write_number_int64 ...passed 00:02:37.712 Test: test_write_number_uint64 ...passed 00:02:37.712 Test: test_write_number_double ...passed 00:02:37.712 Test: test_write_uuid ...passed 00:02:37.712 Test: test_write_array ...passed 00:02:37.712 Test: test_write_object ...passed 00:02:37.712 Test: test_write_nesting ...passed 00:02:37.712 Test: test_write_val ...passed 00:02:37.712 00:02:37.712 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.712 suites 1 1 n/a 0 0 00:02:37.712 tests 16 16 16 0 0 00:02:37.712 asserts 918 918 918 0 n/a 00:02:37.712 00:02:37.712 Elapsed time = 0.000 seconds 00:02:37.712 13:26:16 -- unit/unittest.sh@78 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:02:37.712 00:02:37.712 00:02:37.712 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.712 http://cunit.sourceforge.net/ 00:02:37.712 00:02:37.712 00:02:37.712 Suite: jsonrpc 00:02:37.712 Test: test_parse_request ...passed 00:02:37.712 Test: test_parse_request_streaming ...passed 00:02:37.712 00:02:37.712 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.712 suites 1 1 n/a 0 0 00:02:37.712 tests 2 2 2 0 0 00:02:37.712 asserts 289 289 289 0 n/a 00:02:37.712 00:02:37.712 Elapsed time = 0.000 seconds 00:02:37.712 00:02:37.712 real 0m0.036s 00:02:37.712 user 0m0.011s 00:02:37.712 sys 0m0.026s 00:02:37.712 13:26:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:37.712 13:26:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.712 ************************************ 00:02:37.712 END TEST unittest_json 00:02:37.712 ************************************ 00:02:37.712 13:26:16 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:02:37.712 13:26:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:37.712 13:26:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:37.712 13:26:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.712 ************************************ 00:02:37.712 START TEST unittest_rpc 00:02:37.712 ************************************ 00:02:37.712 13:26:16 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:02:37.712 13:26:16 -- unit/unittest.sh@82 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:02:37.712 00:02:37.712 00:02:37.712 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.712 http://cunit.sourceforge.net/ 00:02:37.712 
00:02:37.712 00:02:37.712 Suite: rpc 00:02:37.712 Test: test_jsonrpc_handler ...passed 00:02:37.712 Test: test_spdk_rpc_is_method_allowed ...passed 00:02:37.712 Test: test_rpc_get_methods ...[2024-07-10 13:26:16.986259] /usr/home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:02:37.712 passed 00:02:37.712 Test: test_rpc_spdk_get_version ...passed 00:02:37.712 Test: test_spdk_rpc_listen_close ...passed 00:02:37.712 00:02:37.712 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.712 suites 1 1 n/a 0 0 00:02:37.712 tests 5 5 5 0 0 00:02:37.712 asserts 20 20 20 0 n/a 00:02:37.712 00:02:37.712 Elapsed time = 0.000 seconds 00:02:37.712 00:02:37.712 real 0m0.009s 00:02:37.712 user 0m0.009s 00:02:37.712 sys 0m0.000s 00:02:37.712 13:26:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:37.712 13:26:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.712 ************************************ 00:02:37.712 END TEST unittest_rpc 00:02:37.712 ************************************ 00:02:37.712 13:26:17 -- unit/unittest.sh@245 -- # run_test unittest_notify /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:02:37.712 13:26:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:37.712 13:26:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:37.712 13:26:17 -- common/autotest_common.sh@10 -- # set +x 00:02:37.712 ************************************ 00:02:37.712 START TEST unittest_notify 00:02:37.712 ************************************ 00:02:37.712 13:26:17 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:02:37.712 00:02:37.712 00:02:37.712 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.712 http://cunit.sourceforge.net/ 00:02:37.712 00:02:37.712 00:02:37.712 Suite: app_suite 00:02:37.712 Test: notify ...passed 00:02:37.712 00:02:37.712 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.712 suites 1 1 n/a 0 0 00:02:37.712 tests 1 1 1 0 0 00:02:37.712 asserts 13 13 13 0 n/a 00:02:37.712 00:02:37.712 Elapsed time = 0.000 seconds 00:02:37.712 00:02:37.712 real 0m0.006s 00:02:37.712 user 0m0.001s 00:02:37.712 sys 0m0.008s 00:02:37.713 13:26:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:37.713 13:26:17 -- common/autotest_common.sh@10 -- # set +x 00:02:37.713 ************************************ 00:02:37.713 END TEST unittest_notify 00:02:37.713 ************************************ 00:02:37.972 13:26:17 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:02:37.972 13:26:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:37.972 13:26:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:37.972 13:26:17 -- common/autotest_common.sh@10 -- # set +x 00:02:37.972 ************************************ 00:02:37.972 START TEST unittest_nvme 00:02:37.972 ************************************ 00:02:37.972 13:26:17 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:02:37.972 13:26:17 -- unit/unittest.sh@86 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:02:37.972 00:02:37.972 00:02:37.972 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.972 http://cunit.sourceforge.net/ 00:02:37.972 00:02:37.972 00:02:37.972 Suite: nvme 00:02:37.972 Test: test_opc_data_transfer ...passed 00:02:37.972 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:02:37.972 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 
00:02:37.972 Test: test_trid_parse_and_compare ...[2024-07-10 13:26:17.086254] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:02:37.972 [2024-07-10 13:26:17.086477] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:02:37.972 [2024-07-10 13:26:17.086618] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1180:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:02:37.972 [2024-07-10 13:26:17.086642] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:02:37.972 [2024-07-10 13:26:17.086653] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:02:37.972 [2024-07-10 13:26:17.086664] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:02:37.972 passed 00:02:37.972 Test: test_trid_trtype_str ...passed 00:02:37.972 Test: test_trid_adrfam_str ...passed 00:02:37.972 Test: test_nvme_ctrlr_probe ...[2024-07-10 13:26:17.086751] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:02:37.972 passed 00:02:37.972 Test: test_spdk_nvme_probe ...[2024-07-10 13:26:17.086770] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:02:37.972 [2024-07-10 13:26:17.086780] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:02:37.972 [2024-07-10 13:26:17.086791] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 813:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:02:37.972 passed 00:02:37.972 Test: test_spdk_nvme_connect ...[2024-07-10 13:26:17.086801] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:02:37.972 [2024-07-10 13:26:17.086820] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:02:37.972 passed 00:02:37.972 Test: test_nvme_ctrlr_probe_internal ...[2024-07-10 13:26:17.086871] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:02:37.973 [2024-07-10 13:26:17.086882] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:02:37.973 [2024-07-10 13:26:17.086903] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:02:37.973 [2024-07-10 13:26:17.086914] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:02:37.973 passed 00:02:37.973 Test: test_nvme_init_controllers ...passed 00:02:37.973 Test: test_nvme_driver_init ...[2024-07-10 13:26:17.086929] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:02:37.973 [2024-07-10 13:26:17.086948] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:02:37.973 [2024-07-10 13:26:17.086960] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:02:37.973 passed 00:02:37.973 Test: test_spdk_nvme_detach ...passed 00:02:37.973 Test: 
test_nvme_completion_poll_cb ...passed 00:02:37.973 Test: test_nvme_user_copy_cmd_complete ...passed 00:02:37.973 Test: test_nvme_allocate_request_null ...passed 00:02:37.973 Test: test_nvme_allocate_request ...passed 00:02:37.973 Test: test_nvme_free_request ...passed 00:02:37.973 Test: test_nvme_allocate_request_user_copy ...passed 00:02:37.973 Test: test_nvme_robust_mutex_init_shared ...passed 00:02:37.973 Test: test_nvme_request_check_timeout ...passed 00:02:37.973 Test: test_nvme_wait_for_completion ...passed 00:02:37.973 Test: test_spdk_nvme_parse_func ...passed 00:02:37.973 Test: test_spdk_nvme_detach_async ...passed 00:02:37.973 Test: test_nvme_parse_addr ...passed 00:02:37.973 00:02:37.973 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.973 suites 1 1 n/a 0 0 00:02:37.973 tests 25 25 25 0 0 00:02:37.973 asserts 326 326 326 0 n/a 00:02:37.973 00:02:37.973 Elapsed time = 0.000 seconds 00:02:37.973 [2024-07-10 13:26:17.196382] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:02:37.973 [2024-07-10 13:26:17.196637] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:02:37.973 13:26:17 -- unit/unittest.sh@87 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:02:37.973 00:02:37.973 00:02:37.973 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.973 http://cunit.sourceforge.net/ 00:02:37.973 00:02:37.973 00:02:37.973 Suite: nvme_ctrlr 00:02:37.973 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-10 13:26:17.202760] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 passed 00:02:37.973 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-10 13:26:17.204135] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 passed 00:02:37.973 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-10 13:26:17.205273] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 passed 00:02:37.973 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-10 13:26:17.206430] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 passed 00:02:37.973 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-10 13:26:17.207592] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 [2024-07-10 13:26:17.208714] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-10 13:26:17.209869] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-10 13:26:17.210996] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:02:37.973 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-10 13:26:17.213236] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 [2024-07-10 13:26:17.215485] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-10 13:26:17.216652] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:02:37.973 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-10 13:26:17.218970] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 [2024-07-10 13:26:17.220167] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-10 13:26:17.222462] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:02:37.973 Test: test_nvme_ctrlr_init_delay ...[2024-07-10 13:26:17.224782] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 passed 00:02:37.973 Test: test_alloc_io_qpair_rr_1 ...[2024-07-10 13:26:17.225986] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 [2024-07-10 13:26:17.226051] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:02:37.973 [2024-07-10 13:26:17.226080] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:02:37.973 [2024-07-10 13:26:17.226103] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:02:37.973 passed 00:02:37.973 Test: test_ctrlr_get_default_ctrlr_opts ...[2024-07-10 13:26:17.226122] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:02:37.973 passed 00:02:37.973 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:02:37.973 Test: test_alloc_io_qpair_wrr_1 ...passed 00:02:37.973 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-10 13:26:17.226246] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 [2024-07-10 13:26:17.226294] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 [2024-07-10 13:26:17.226322] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:02:37.973 passed 00:02:37.973 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-10 13:26:17.226380] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 
00:02:37.973 [2024-07-10 13:26:17.226403] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:02:37.973 passed 00:02:37.973 Test: test_nvme_ctrlr_fail ...[2024-07-10 13:26:17.226424] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:02:37.973 [2024-07-10 13:26:17.226445] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:02:37.973 [2024-07-10 13:26:17.226469] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:02:37.973 passed 00:02:37.973 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:02:37.973 Test: test_nvme_ctrlr_set_supported_features ...passed 00:02:37.973 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:02:37.973 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-10 13:26:17.226565] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 passed 00:02:37.973 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:02:37.973 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:02:37.973 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:02:37.973 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-10 13:26:17.270214] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 passed 00:02:37.973 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-10 13:26:17.276769] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 passed 00:02:37.973 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-10 13:26:17.277888] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 [2024-07-10 13:26:17.277904] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2871:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:02:37.973 passed 00:02:37.973 Test: test_alloc_io_qpair_fail ...passed 00:02:37.973 Test: test_nvme_ctrlr_add_remove_process ...[2024-07-10 13:26:17.279001] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 [2024-07-10 13:26:17.279018] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:02:37.973 passed 00:02:37.973 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:02:37.973 Test: test_nvme_ctrlr_set_state ...passed 00:02:37.973 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-10 13:26:17.279035] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1466:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:02:37.973 [2024-07-10 13:26:17.279043] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 passed 00:02:37.973 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-10 13:26:17.282431] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 passed 00:02:37.973 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-10 13:26:17.290642] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 passed 00:02:37.973 Test: test_nvme_ctrlr_reset ...[2024-07-10 13:26:17.291818] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 passed 00:02:37.973 Test: test_nvme_ctrlr_aer_callback ...[2024-07-10 13:26:17.291889] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 passed 00:02:37.973 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-10 13:26:17.293054] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 passed 00:02:37.973 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:02:37.973 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:02:37.973 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-10 13:26:17.294344] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 passed 00:02:37.973 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:02:37.973 Test: test_nvme_ctrlr_ana_resize ...[2024-07-10 13:26:17.295523] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.973 passed 00:02:37.974 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:02:37.974 Test: test_nvme_transport_ctrlr_ready ...[2024-07-10 13:26:17.296717] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:02:37.974 passed 00:02:37.974 Test: test_nvme_ctrlr_disable ...[2024-07-10 13:26:17.296751] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:02:37.974 [2024-07-10 13:26:17.296768] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4137:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:02:37.974 passed 00:02:37.974 00:02:37.974 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.974 suites 1 1 n/a 0 0 00:02:37.974 tests 43 43 43 0 0 00:02:37.974 asserts 10418 10418 10418 0 n/a 00:02:37.974 00:02:37.974 Elapsed time = 0.055 seconds 00:02:37.974 13:26:17 -- unit/unittest.sh@88 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:02:37.974 00:02:37.974 00:02:37.974 CUnit - A 
unit testing framework for C - Version 2.1-3 00:02:37.974 http://cunit.sourceforge.net/ 00:02:37.974 00:02:37.974 00:02:37.974 Suite: nvme_ctrlr_cmd 00:02:37.974 Test: test_get_log_pages ...passed 00:02:37.974 Test: test_set_feature_cmd ...passed 00:02:37.974 Test: test_set_feature_ns_cmd ...passed 00:02:37.974 Test: test_get_feature_cmd ...passed 00:02:37.974 Test: test_get_feature_ns_cmd ...passed 00:02:37.974 Test: test_abort_cmd ...passed 00:02:37.974 Test: test_set_host_id_cmds ...[2024-07-10 13:26:17.309220] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:02:37.974 passed 00:02:37.974 Test: test_io_cmd_raw_no_payload_build ...passed 00:02:37.974 Test: test_io_raw_cmd ...passed 00:02:37.974 Test: test_io_raw_cmd_with_md ...passed 00:02:37.974 Test: test_namespace_attach ...passed 00:02:37.974 Test: test_namespace_detach ...passed 00:02:37.974 Test: test_namespace_create ...passed 00:02:37.974 Test: test_namespace_delete ...passed 00:02:37.974 Test: test_doorbell_buffer_config ...passed 00:02:37.974 Test: test_format_nvme ...passed 00:02:37.974 Test: test_fw_commit ...passed 00:02:37.974 Test: test_fw_image_download ...passed 00:02:37.974 Test: test_sanitize ...passed 00:02:37.974 Test: test_directive ...passed 00:02:37.974 Test: test_nvme_request_add_abort ...passed 00:02:37.974 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:02:37.974 Test: test_nvme_ctrlr_cmd_identify ...passed 00:02:37.974 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:02:37.974 00:02:37.974 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.974 suites 1 1 n/a 0 0 00:02:37.974 tests 24 24 24 0 0 00:02:37.974 asserts 198 198 198 0 n/a 00:02:37.974 00:02:37.974 Elapsed time = 0.000 seconds 00:02:37.974 13:26:17 -- unit/unittest.sh@89 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:02:37.974 00:02:37.974 00:02:37.974 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.974 http://cunit.sourceforge.net/ 00:02:37.974 00:02:37.974 00:02:37.974 Suite: nvme_ctrlr_cmd 00:02:37.974 Test: test_geometry_cmd ...passed 00:02:37.974 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:02:37.974 00:02:37.974 Run Summary: Type Total Ran Passed Failed Inactive 00:02:37.974 suites 1 1 n/a 0 0 00:02:37.974 tests 2 2 2 0 0 00:02:37.974 asserts 7 7 7 0 n/a 00:02:37.974 00:02:37.974 Elapsed time = 0.000 seconds 00:02:37.974 13:26:17 -- unit/unittest.sh@90 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:02:37.974 00:02:37.974 00:02:37.974 CUnit - A unit testing framework for C - Version 2.1-3 00:02:37.974 http://cunit.sourceforge.net/ 00:02:37.974 00:02:37.974 00:02:37.974 Suite: nvme 00:02:37.974 Test: test_nvme_ns_construct ...passed 00:02:37.974 Test: test_nvme_ns_uuid ...passed 00:02:37.974 Test: test_nvme_ns_csi ...passed 00:02:37.974 Test: test_nvme_ns_data ...passed 00:02:37.974 Test: test_nvme_ns_set_identify_data ...passed 00:02:37.974 Test: test_spdk_nvme_ns_get_values ...passed 00:02:37.974 Test: test_spdk_nvme_ns_is_active ...passed 00:02:37.974 Test: spdk_nvme_ns_supports ...passed 00:02:37.974 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:02:37.974 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:02:37.974 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:02:37.974 Test: test_nvme_ns_find_id_desc ...passed 00:02:37.974 00:02:37.974 Run Summary: Type Total Ran Passed Failed 
Inactive 00:02:37.974 suites 1 1 n/a 0 0 00:02:37.974 tests 12 12 12 0 0 00:02:37.974 asserts 83 83 83 0 n/a 00:02:37.974 00:02:37.974 Elapsed time = 0.000 seconds 00:02:37.974 13:26:17 -- unit/unittest.sh@91 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:02:38.236 00:02:38.236 00:02:38.236 CUnit - A unit testing framework for C - Version 2.1-3 00:02:38.236 http://cunit.sourceforge.net/ 00:02:38.236 00:02:38.236 00:02:38.236 Suite: nvme_ns_cmd 00:02:38.236 Test: split_test ...passed 00:02:38.236 Test: split_test2 ...passed 00:02:38.236 Test: split_test3 ...passed 00:02:38.236 Test: split_test4 ...passed 00:02:38.236 Test: test_nvme_ns_cmd_flush ...passed 00:02:38.236 Test: test_nvme_ns_cmd_dataset_management ...passed 00:02:38.236 Test: test_nvme_ns_cmd_copy ...passed 00:02:38.236 Test: test_io_flags ...[2024-07-10 13:26:17.336329] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:02:38.236 passed 00:02:38.236 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:02:38.236 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:02:38.236 Test: test_nvme_ns_cmd_reservation_register ...passed 00:02:38.236 Test: test_nvme_ns_cmd_reservation_release ...passed 00:02:38.236 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:02:38.236 Test: test_nvme_ns_cmd_reservation_report ...passed 00:02:38.236 Test: test_cmd_child_request ...passed 00:02:38.236 Test: test_nvme_ns_cmd_readv ...passed 00:02:38.236 Test: test_nvme_ns_cmd_read_with_md ...passed 00:02:38.236 Test: test_nvme_ns_cmd_writev ...[2024-07-10 13:26:17.336907] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 288:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:02:38.236 passed 00:02:38.236 Test: test_nvme_ns_cmd_write_with_md ...passed 00:02:38.236 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:02:38.236 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:02:38.236 Test: test_nvme_ns_cmd_comparev ...passed 00:02:38.236 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:02:38.236 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:02:38.236 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:02:38.236 Test: test_nvme_ns_cmd_setup_request ...passed 00:02:38.236 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:02:38.236 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-07-10 13:26:17.337123] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:02:38.236 passed 00:02:38.236 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:02:38.236 Test: test_nvme_ns_cmd_verify ...passed 00:02:38.236 Test: test_nvme_ns_cmd_io_mgmt_send ...[2024-07-10 13:26:17.337168] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:02:38.236 passed 00:02:38.236 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:02:38.236 00:02:38.236 Run Summary: Type Total Ran Passed Failed Inactive 00:02:38.236 suites 1 1 n/a 0 0 00:02:38.236 tests 32 32 32 0 0 00:02:38.236 asserts 550 550 550 0 n/a 00:02:38.236 00:02:38.236 Elapsed time = 0.008 seconds 00:02:38.236 13:26:17 -- unit/unittest.sh@92 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:02:38.236 00:02:38.236 00:02:38.236 CUnit - A unit testing framework for C - Version 2.1-3 00:02:38.236 http://cunit.sourceforge.net/ 00:02:38.236 00:02:38.236 00:02:38.236 
Suite: nvme_ns_cmd 00:02:38.236 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:02:38.236 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:02:38.236 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:02:38.236 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:02:38.236 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:02:38.236 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:02:38.236 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:02:38.236 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:02:38.236 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:02:38.236 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:02:38.236 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:02:38.236 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:02:38.236 00:02:38.236 Run Summary: Type Total Ran Passed Failed Inactive 00:02:38.236 suites 1 1 n/a 0 0 00:02:38.236 tests 12 12 12 0 0 00:02:38.236 asserts 123 123 123 0 n/a 00:02:38.236 00:02:38.236 Elapsed time = 0.000 seconds 00:02:38.236 13:26:17 -- unit/unittest.sh@93 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:02:38.236 00:02:38.236 00:02:38.236 CUnit - A unit testing framework for C - Version 2.1-3 00:02:38.236 http://cunit.sourceforge.net/ 00:02:38.236 00:02:38.236 00:02:38.236 Suite: nvme_qpair 00:02:38.236 Test: test3 ...passed 00:02:38.236 Test: test_ctrlr_failed ...passed 00:02:38.236 Test: struct_packing ...passed 00:02:38.236 Test: test_nvme_qpair_process_completions ...[2024-07-10 13:26:17.355658] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:02:38.236 [2024-07-10 13:26:17.355981] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:02:38.236 [2024-07-10 13:26:17.356067] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 0 00:02:38.236 [2024-07-10 13:26:17.356096] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 1 00:02:38.236 passed 00:02:38.236 Test: test_nvme_completion_is_retry ...passed 00:02:38.236 Test: test_get_status_string ...passed 00:02:38.236 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:02:38.236 Test: test_nvme_qpair_submit_request ...passed 00:02:38.236 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:02:38.236 Test: test_nvme_qpair_manual_complete_request ...passed 00:02:38.236 Test: test_nvme_qpair_init_deinit ...[2024-07-10 13:26:17.356166] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:02:38.236 passed 00:02:38.236 Test: test_nvme_get_sgl_print_info ...passed 00:02:38.236 00:02:38.236 Run Summary: Type Total Ran Passed Failed Inactive 00:02:38.236 suites 1 1 n/a 0 0 00:02:38.236 tests 12 12 12 0 0 00:02:38.236 asserts 154 154 154 0 n/a 00:02:38.236 00:02:38.236 Elapsed time = 0.000 seconds 00:02:38.236 13:26:17 -- unit/unittest.sh@94 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:02:38.236 00:02:38.236 00:02:38.236 CUnit - A unit testing framework for C - Version 2.1-3 00:02:38.236 
http://cunit.sourceforge.net/ 00:02:38.236 00:02:38.236 00:02:38.236 Suite: nvme_pcie 00:02:38.236 Test: test_prp_list_append ...[2024-07-10 13:26:17.365302] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:02:38.236 [2024-07-10 13:26:17.365747] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:02:38.236 [2024-07-10 13:26:17.365789] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:02:38.236 [2024-07-10 13:26:17.365911] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:02:38.236 [2024-07-10 13:26:17.366000] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:02:38.236 passed 00:02:38.236 Test: test_nvme_pcie_hotplug_monitor ...passed 00:02:38.236 Test: test_shadow_doorbell_update ...passed 00:02:38.236 Test: test_build_contig_hw_sgl_request ...passed 00:02:38.236 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:02:38.236 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:02:38.236 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:02:38.236 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-07-10 13:26:17.366192] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:02:38.236 passed 00:02:38.236 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:02:38.236 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:02:38.236 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:02:38.236 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-07-10 13:26:17.366238] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:02:38.236 passed 00:02:38.236 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:02:38.236 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-07-10 13:26:17.366267] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:02:38.236 [2024-07-10 13:26:17.366298] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:02:38.236 [2024-07-10 13:26:17.366323] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:02:38.236 passed 00:02:38.236 00:02:38.236 Run Summary: Type Total Ran Passed Failed Inactive 00:02:38.236 suites 1 1 n/a 0 0 00:02:38.236 tests 14 14 14 0 0 00:02:38.236 asserts 235 235 235 0 n/a 00:02:38.236 00:02:38.236 Elapsed time = 0.000 seconds 00:02:38.236 13:26:17 -- unit/unittest.sh@95 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:02:38.236 00:02:38.236 00:02:38.236 CUnit - A unit testing framework for C - Version 2.1-3 00:02:38.236 http://cunit.sourceforge.net/ 00:02:38.236 00:02:38.236 00:02:38.236 Suite: nvme_ns_cmd 00:02:38.236 Test: nvme_poll_group_create_test ...passed 00:02:38.236 Test: nvme_poll_group_add_remove_test ...passed 00:02:38.236 Test: nvme_poll_group_process_completions ...passed 00:02:38.236 Test: nvme_poll_group_destroy_test ...passed 00:02:38.236 Test: nvme_poll_group_get_free_stats ...passed 00:02:38.236 00:02:38.236 Run Summary: Type Total Ran Passed Failed Inactive 00:02:38.237 suites 1 1 n/a 0 0 00:02:38.237 tests 5 5 5 0 0 00:02:38.237 asserts 75 75 75 0 n/a 00:02:38.237 00:02:38.237 Elapsed time = 0.000 seconds 00:02:38.237 13:26:17 -- unit/unittest.sh@96 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:02:38.237 00:02:38.237 00:02:38.237 CUnit - A unit testing framework for C - Version 2.1-3 00:02:38.237 http://cunit.sourceforge.net/ 00:02:38.237 00:02:38.237 00:02:38.237 Suite: nvme_quirks 00:02:38.237 Test: test_nvme_quirks_striping ...passed 00:02:38.237 00:02:38.237 Run Summary: Type Total Ran Passed Failed Inactive 00:02:38.237 suites 1 1 n/a 0 0 00:02:38.237 tests 1 1 1 0 0 00:02:38.237 asserts 5 5 5 0 n/a 00:02:38.237 00:02:38.237 Elapsed time = 0.000 seconds 00:02:38.237 13:26:17 -- unit/unittest.sh@97 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:02:38.237 00:02:38.237 00:02:38.237 CUnit - A unit testing framework for C - Version 2.1-3 00:02:38.237 http://cunit.sourceforge.net/ 00:02:38.237 00:02:38.237 00:02:38.237 Suite: nvme_tcp 00:02:38.237 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:02:38.237 Test: test_nvme_tcp_build_iovs ...passed 00:02:38.237 Test: test_nvme_tcp_build_sgl_request ...[2024-07-10 13:26:17.383895] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 784:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x8209a0a80, and the iovcnt=16, remaining_size=28672 00:02:38.237 passed 00:02:38.237 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:02:38.237 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:02:38.237 Test: test_nvme_tcp_req_complete_safe ...passed 00:02:38.237 Test: test_nvme_tcp_req_get ...passed 00:02:38.237 Test: test_nvme_tcp_req_init ...passed 00:02:38.237 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:02:38.237 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:02:38.237 Test: test_nvme_tcp_qpair_set_recv_state 
...[2024-07-10 13:26:17.384131] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209a25f0 is same with the state(6) to be set 00:02:38.237 passed 00:02:38.237 Test: test_nvme_tcp_alloc_reqs ...passed 00:02:38.237 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:02:38.237 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-10 13:26:17.384156] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209a1940 is same with the state(5) to be set 00:02:38.237 [2024-07-10 13:26:17.384171] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x8209a1ee8 00:02:38.237 [2024-07-10 13:26:17.384179] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:02:38.237 [2024-07-10 13:26:17.384186] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209a1d78 is same with the state(5) to be set 00:02:38.237 [2024-07-10 13:26:17.384192] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:02:38.237 [2024-07-10 13:26:17.384199] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209a1d78 is same with the state(5) to be set 00:02:38.237 [2024-07-10 13:26:17.384206] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:02:38.237 [2024-07-10 13:26:17.384213] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209a1d78 is same with the state(5) to be set 00:02:38.237 [2024-07-10 13:26:17.384219] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209a1d78 is same with the state(5) to be set 00:02:38.237 [2024-07-10 13:26:17.384226] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209a1d78 is same with the state(5) to be set 00:02:38.237 [2024-07-10 13:26:17.384232] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209a1d78 is same with the state(5) to be set 00:02:38.237 [2024-07-10 13:26:17.384239] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209a1d78 is same with the state(5) to be set 00:02:38.237 passed 00:02:38.237 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-10 13:26:17.384246] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209a1d78 is same with the state(5) to be set 00:02:38.237 [2024-07-10 13:26:17.384269] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:02:38.237 [2024-07-10 13:26:17.384277] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:02:38.237 passed 00:02:38.237 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:02:38.237 Test: test_nvme_tcp_c2h_payload_handle ...passed 
00:02:38.237 Test: test_nvme_tcp_icresp_handle ...[2024-07-10 13:26:17.470587] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:02:38.237 [2024-07-10 13:26:17.470693] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1283:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x8209a2320): PDU Sequence Error 00:02:38.237 [2024-07-10 13:26:17.470712] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:02:38.237 [2024-07-10 13:26:17.470726] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1516:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:02:38.237 [2024-07-10 13:26:17.470737] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209a1940 is same with the state(5) to be set 00:02:38.237 passed 00:02:38.237 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:02:38.237 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:02:38.237 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:02:38.237 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-10 13:26:17.470748] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:02:38.237 [2024-07-10 13:26:17.470757] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209a1940 is same with the state(5) to be set 00:02:38.237 [2024-07-10 13:26:17.470768] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209a1940 is same with the state(0) to be set 00:02:38.237 [2024-07-10 13:26:17.470788] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1283:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x8209a2320): PDU Sequence Error 00:02:38.237 [2024-07-10 13:26:17.470809] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x8209a0be0 00:02:38.237 [2024-07-10 13:26:17.470837] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x8209a0368, errno=0, rc=0 00:02:38.237 [2024-07-10 13:26:17.470849] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209a0368 is same with the state(5) to be set 00:02:38.237 [2024-07-10 13:26:17.470861] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209a0368 is same with the state(5) to be set 00:02:38.237 passed 00:02:38.237 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-10 13:26:17.470926] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2099:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8209a0368 (0): No error: 0 00:02:38.237 [2024-07-10 13:26:17.470947] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2099:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8209a0368 (0): No error: 0 00:02:38.237 passed 00:02:38.237 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:02:38.237 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:02:38.237 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-10 13:26:17.526768] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2423:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:02:38.237 [2024-07-10 13:26:17.526858] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2423:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:02:38.237 [2024-07-10 13:26:17.526920] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:02:38.237 [2024-07-10 13:26:17.526931] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:02:38.237 [2024-07-10 13:26:17.526983] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2423:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:02:38.237 [2024-07-10 13:26:17.526994] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:02:38.237 passed 00:02:38.237 Test: test_nvme_tcp_qpair_submit_request ...passed 00:02:38.237 00:02:38.237 Run Summary: Type Total Ran Passed Failed Inactive 00:02:38.237 suites 1 1 n/a 0 0 00:02:38.237 tests 27 27 27 0 0 00:02:38.237 asserts 624 624 624 0 n/a 00:02:38.237 00:02:38.237 Elapsed time = 0.055 seconds 00:02:38.237 [2024-07-10 13:26:17.527007] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:02:38.237 [2024-07-10 13:26:17.527017] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:02:38.237 [2024-07-10 13:26:17.527032] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2290:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x82d5ad180 with addr=192.168.1.78, port=23 00:02:38.237 [2024-07-10 13:26:17.527040] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:02:38.237 [2024-07-10 13:26:17.527058] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 784:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x82d5ad300, and the iovcnt=1, remaining_size=1024 00:02:38.237 [2024-07-10 13:26:17.527076] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:02:38.237 13:26:17 -- unit/unittest.sh@98 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:02:38.237 00:02:38.237 00:02:38.237 CUnit - A unit testing framework for C - Version 2.1-3 00:02:38.237 http://cunit.sourceforge.net/ 00:02:38.237 00:02:38.237 00:02:38.237 Suite: nvme_transport 00:02:38.237 Test: test_nvme_get_transport ...passed 00:02:38.237 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:02:38.237 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:02:38.237 Test: test_nvme_transport_poll_group_add_remove ...passed 00:02:38.237 Test: test_ctrlr_get_memory_domains ...passed 00:02:38.237 00:02:38.237 Run Summary: Type Total Ran Passed Failed Inactive 00:02:38.237 suites 1 1 n/a 0 0 00:02:38.237 tests 5 5 5 0 0 00:02:38.237 asserts 28 28 28 0 n/a 00:02:38.237 00:02:38.237 Elapsed time = 0.000 seconds 00:02:38.237 13:26:17 -- unit/unittest.sh@99 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:02:38.237 00:02:38.237 00:02:38.237 CUnit - A unit testing 
framework for C - Version 2.1-3 00:02:38.237 http://cunit.sourceforge.net/ 00:02:38.238 00:02:38.238 00:02:38.238 Suite: nvme_io_msg 00:02:38.238 Test: test_nvme_io_msg_send ...passed 00:02:38.238 Test: test_nvme_io_msg_process ...passed 00:02:38.238 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:02:38.238 00:02:38.238 Run Summary: Type Total Ran Passed Failed Inactive 00:02:38.238 suites 1 1 n/a 0 0 00:02:38.238 tests 3 3 3 0 0 00:02:38.238 asserts 56 56 56 0 n/a 00:02:38.238 00:02:38.238 Elapsed time = 0.000 seconds 00:02:38.238 13:26:17 -- unit/unittest.sh@100 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:02:38.238 00:02:38.238 00:02:38.238 CUnit - A unit testing framework for C - Version 2.1-3 00:02:38.238 http://cunit.sourceforge.net/ 00:02:38.238 00:02:38.238 00:02:38.238 Suite: nvme_pcie_common 00:02:38.238 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-10 13:26:17.554418] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:02:38.238 passed 00:02:38.238 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:02:38.238 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:02:38.238 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-10 13:26:17.554945] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:02:38.238 [2024-07-10 13:26:17.554997] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:02:38.238 passed 00:02:38.238 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-07-10 13:26:17.555037] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:02:38.238 passed 00:02:38.238 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:02:38.238 00:02:38.238 [2024-07-10 13:26:17.555248] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:02:38.238 [2024-07-10 13:26:17.555275] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:02:38.238 Run Summary: Type Total Ran Passed Failed Inactive 00:02:38.238 suites 1 1 n/a 0 0 00:02:38.238 tests 6 6 6 0 0 00:02:38.238 asserts 148 148 148 0 n/a 00:02:38.238 00:02:38.238 Elapsed time = 0.000 seconds 00:02:38.238 13:26:17 -- unit/unittest.sh@101 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:02:38.238 00:02:38.238 00:02:38.238 CUnit - A unit testing framework for C - Version 2.1-3 00:02:38.238 http://cunit.sourceforge.net/ 00:02:38.238 00:02:38.238 00:02:38.238 Suite: nvme_fabric 00:02:38.238 Test: test_nvme_fabric_prop_set_cmd ...passed 00:02:38.238 Test: test_nvme_fabric_prop_get_cmd ...passed 00:02:38.238 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:02:38.238 Test: test_nvme_fabric_discover_probe ...passed 00:02:38.238 Test: test_nvme_fabric_qpair_connect ...passed 00:02:38.238 00:02:38.238 Run Summary: Type Total Ran Passed Failed Inactive 00:02:38.238 suites 1 1 n/a 0 0 00:02:38.238 tests 5 5 5 0 0 00:02:38.238 asserts 60 60 60 0 n/a 00:02:38.238 00:02:38.238 Elapsed time = 0.000 seconds 00:02:38.238 [2024-07-10 13:26:17.560967] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 605:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -85, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:02:38.238 13:26:17 -- unit/unittest.sh@102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:02:38.238 00:02:38.238 00:02:38.238 CUnit - A unit testing framework for C - Version 2.1-3 00:02:38.238 http://cunit.sourceforge.net/ 00:02:38.238 00:02:38.238 00:02:38.238 Suite: nvme_opal 00:02:38.238 Test: test_opal_nvme_security_recv_send_done ...passed 00:02:38.238 Test: test_opal_add_short_atom_header ...[2024-07-10 13:26:17.569247] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:02:38.238 passed 00:02:38.238 00:02:38.238 Run Summary: Type Total Ran Passed Failed Inactive 00:02:38.238 suites 1 1 n/a 0 0 00:02:38.238 tests 2 2 2 0 0 00:02:38.238 asserts 22 22 22 0 n/a 00:02:38.238 00:02:38.238 Elapsed time = 0.000 seconds 00:02:38.238 00:02:38.238 real 0m0.490s 00:02:38.238 user 0m0.118s 00:02:38.238 sys 0m0.125s 00:02:38.238 13:26:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:38.238 13:26:17 -- common/autotest_common.sh@10 -- # set +x 00:02:38.238 ************************************ 00:02:38.238 END TEST unittest_nvme 00:02:38.238 ************************************ 00:02:38.497 13:26:17 -- unit/unittest.sh@247 -- # run_test unittest_log /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:02:38.497 13:26:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:38.497 13:26:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:38.497 13:26:17 -- common/autotest_common.sh@10 -- # set +x 00:02:38.497 ************************************ 00:02:38.497 START TEST unittest_log 00:02:38.497 ************************************ 00:02:38.497 13:26:17 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:02:38.497 00:02:38.497 00:02:38.497 CUnit - A unit testing framework for C - Version 2.1-3 00:02:38.497 http://cunit.sourceforge.net/ 00:02:38.497 00:02:38.497 00:02:38.497 Suite: log 00:02:38.497 Test: log_test ...[2024-07-10 13:26:17.624629] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:02:38.497 [2024-07-10 13:26:17.624806] log_ut.c: 55:log_test: *DEBUG*: log test 00:02:38.497 log dump test: 00:02:38.497 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:02:38.497 spdk dump test: 00:02:38.497 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:02:38.497 spdk dump test: 00:02:38.497 passed 00:02:38.497 Test: deprecation ...00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:02:38.497 00000010 65 20 63 68 61 72 73 e chars 00:02:39.429 passed 00:02:39.429 00:02:39.429 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.429 suites 1 1 n/a 0 0 00:02:39.429 tests 2 2 2 0 0 00:02:39.429 asserts 73 73 73 0 n/a 00:02:39.429 00:02:39.429 Elapsed time = 0.000 seconds 00:02:39.429 00:02:39.429 real 0m1.069s 00:02:39.429 user 0m0.001s 00:02:39.429 sys 0m0.004s 00:02:39.429 13:26:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:39.429 13:26:18 -- common/autotest_common.sh@10 -- # set +x 00:02:39.429 ************************************ 00:02:39.429 END TEST unittest_log 00:02:39.429 ************************************ 00:02:39.429 13:26:18 -- unit/unittest.sh@248 -- # run_test unittest_lvol 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:02:39.429 13:26:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:39.429 13:26:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:39.429 13:26:18 -- common/autotest_common.sh@10 -- # set +x 00:02:39.429 ************************************ 00:02:39.429 START TEST unittest_lvol 00:02:39.429 ************************************ 00:02:39.429 13:26:18 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:02:39.429 00:02:39.429 00:02:39.429 CUnit - A unit testing framework for C - Version 2.1-3 00:02:39.429 http://cunit.sourceforge.net/ 00:02:39.429 00:02:39.429 00:02:39.429 Suite: lvol 00:02:39.429 Test: lvs_init_unload_success ...[2024-07-10 13:26:18.751003] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:02:39.429 passed 00:02:39.429 Test: lvs_init_destroy_success ...[2024-07-10 13:26:18.751477] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:02:39.430 passed 00:02:39.430 Test: lvs_init_opts_success ...passed 00:02:39.430 Test: lvs_unload_lvs_is_null_fail ...[2024-07-10 13:26:18.751573] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:02:39.430 passed 00:02:39.430 Test: lvs_names ...[2024-07-10 13:26:18.751603] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:02:39.430 [2024-07-10 13:26:18.751625] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:02:39.430 [2024-07-10 13:26:18.751683] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:02:39.430 passed 00:02:39.430 Test: lvol_create_destroy_success ...passed 00:02:39.430 Test: lvol_create_fail ...[2024-07-10 13:26:18.751816] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:02:39.430 [2024-07-10 13:26:18.751843] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:02:39.430 passed 00:02:39.430 Test: lvol_destroy_fail ...passed[2024-07-10 13:26:18.751924] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:02:39.430 00:02:39.430 Test: lvol_close ...[2024-07-10 13:26:18.752005] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:02:39.430 [2024-07-10 13:26:18.752029] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:02:39.430 passed 00:02:39.430 Test: lvol_resize ...passed 00:02:39.430 Test: lvol_set_read_only ...passed 00:02:39.430 Test: test_lvs_load ...[2024-07-10 13:26:18.752157] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:02:39.430 passed 00:02:39.430 Test: lvols_load ...[2024-07-10 13:26:18.752186] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:02:39.430 [2024-07-10 13:26:18.752248] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:02:39.430 [2024-07-10 13:26:18.752312] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: 
*ERROR*: Failed to fetch blobs list 00:02:39.430 passed 00:02:39.430 Test: lvol_open ...passed 00:02:39.430 Test: lvol_snapshot ...passed 00:02:39.430 Test: lvol_snapshot_fail ...passed 00:02:39.430 Test: lvol_clone ...passed 00:02:39.430 Test: lvol_clone_fail ...[2024-07-10 13:26:18.752555] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:02:39.430 passed 00:02:39.430 Test: lvol_iter_clones ...[2024-07-10 13:26:18.752660] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:02:39.430 passed 00:02:39.430 Test: lvol_refcnt ...[2024-07-10 13:26:18.752780] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol fdae9634-3ebf-11ef-b9c4-5b09e08d4792 because it is still open 00:02:39.430 passed 00:02:39.430 Test: lvol_names ...[2024-07-10 13:26:18.752849] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:02:39.430 [2024-07-10 13:26:18.752878] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:02:39.430 [2024-07-10 13:26:18.752932] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:02:39.430 passed 00:02:39.430 Test: lvol_create_thin_provisioned ...passed 00:02:39.430 Test: lvol_rename ...[2024-07-10 13:26:18.753021] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:02:39.430 passed 00:02:39.430 Test: lvs_rename ...[2024-07-10 13:26:18.753052] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:02:39.430 passed 00:02:39.430 Test: lvol_inflate ...[2024-07-10 13:26:18.753106] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:02:39.430 passed 00:02:39.430 Test: lvol_decouple_parent ...[2024-07-10 13:26:18.753184] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:02:39.430 passed 00:02:39.430 Test: lvol_get_xattr ...[2024-07-10 13:26:18.753241] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:02:39.430 passed 00:02:39.430 Test: lvol_esnap_reload ...passed 00:02:39.430 Test: lvol_esnap_create_bad_args ...[2024-07-10 13:26:18.753336] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:02:39.430 [2024-07-10 13:26:18.753357] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:02:39.430 [2024-07-10 13:26:18.753379] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1260:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:02:39.430 [2024-07-10 13:26:18.753417] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:02:39.430 passed 00:02:39.430 Test: lvol_esnap_create_delete ...[2024-07-10 13:26:18.753476] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:02:39.430 passed 00:02:39.430 Test: lvol_esnap_load_esnaps ...[2024-07-10 13:26:18.753581] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1833:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:02:39.430 passed 00:02:39.430 Test: lvol_esnap_missing ...[2024-07-10 13:26:18.753649] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:02:39.430 [2024-07-10 13:26:18.753685] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:02:39.430 passed 00:02:39.430 Test: lvol_esnap_hotplug ... 00:02:39.430 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:02:39.430 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:02:39.430 [2024-07-10 13:26:18.753919] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol fdaec294-3ebf-11ef-b9c4-5b09e08d4792: failed to create esnap bs_dev: error -12 00:02:39.430 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:02:39.430 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:02:39.430 [2024-07-10 13:26:18.754047] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol fdaec757-3ebf-11ef-b9c4-5b09e08d4792: failed to create esnap bs_dev: error -12 00:02:39.430 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:02:39.430 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:02:39.430 [2024-07-10 13:26:18.754120] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol fdaeca79-3ebf-11ef-b9c4-5b09e08d4792: failed to create esnap bs_dev: error -12 00:02:39.430 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:02:39.430 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:02:39.430 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:02:39.430 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:02:39.430 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:02:39.430 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:02:39.430 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:02:39.430 passed 00:02:39.430 Test: lvol_get_by ...passed 00:02:39.430 00:02:39.430 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.430 suites 1 1 n/a 0 0 00:02:39.430 tests 34 34 34 0 0 00:02:39.430 asserts 1439 1439 1439 0 n/a 00:02:39.430 00:02:39.430 Elapsed time = 0.008 seconds 00:02:39.430 00:02:39.430 real 0m0.016s 00:02:39.430 user 0m0.008s 00:02:39.430 sys 0m0.011s 
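For orientation: every suite in this run (nvme, log, lvol, nvmf, ...) is driven by the same CUnit harness, which is what prints the "Suite:", "Test: ... passed" and "Run Summary" blocks seen throughout this log. A minimal sketch of that pattern is shown below; the suite and test names are placeholders, not the actual SPDK *_ut sources, and each real binary simply registers many more cases before returning the failure count as its exit status (roughly what run_test in autotest_common.sh ends up checking).

#include <CUnit/CUnit.h>
#include <CUnit/Basic.h>

/* Placeholder test body; a real SPDK suite registers dozens of these. */
static void
example_case(void)
{
	CU_ASSERT(1 + 1 == 2);
}

int
main(void)
{
	CU_pSuite suite;
	unsigned int num_failures;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	suite = CU_add_suite("example", NULL, NULL);
	CU_ADD_TEST(suite, example_case);

	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();                      /* prints the Suite/Test/Run Summary blocks */
	num_failures = CU_get_number_of_failures();
	CU_cleanup_registry();

	return (int)num_failures;                  /* non-zero exit marks the test as failed */
}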
00:02:39.430 13:26:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:39.430 13:26:18 -- common/autotest_common.sh@10 -- # set +x 00:02:39.430 ************************************ 00:02:39.430 END TEST unittest_lvol 00:02:39.430 ************************************ 00:02:39.692 13:26:18 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:02:39.692 13:26:18 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:02:39.692 13:26:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:39.692 13:26:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:39.692 13:26:18 -- common/autotest_common.sh@10 -- # set +x 00:02:39.692 ************************************ 00:02:39.692 START TEST unittest_nvme_rdma 00:02:39.692 ************************************ 00:02:39.692 13:26:18 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:02:39.692 00:02:39.692 00:02:39.692 CUnit - A unit testing framework for C - Version 2.1-3 00:02:39.692 http://cunit.sourceforge.net/ 00:02:39.692 00:02:39.692 00:02:39.692 Suite: nvme_rdma 00:02:39.692 Test: test_nvme_rdma_build_sgl_request ...[2024-07-10 13:26:18.811946] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:02:39.692 [2024-07-10 13:26:18.812099] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1629:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:02:39.693 [2024-07-10 13:26:18.812113] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1685:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:02:39.693 passed 00:02:39.693 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:02:39.693 Test: test_nvme_rdma_build_contig_request ...[2024-07-10 13:26:18.812126] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1566:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:02:39.693 passed 00:02:39.693 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:02:39.693 Test: test_nvme_rdma_create_reqs ...passed 00:02:39.693 Test: test_nvme_rdma_create_rsps ...[2024-07-10 13:26:18.812141] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:02:39.693 passed 00:02:39.693 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-10 13:26:18.812167] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:02:39.693 [2024-07-10 13:26:18.812220] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1823:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:02:39.693 passed 00:02:39.693 Test: test_nvme_rdma_poller_create ...[2024-07-10 13:26:18.812235] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1823:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
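The create-qpair rejections above (size 0 and size 1 against a minimum of 2) are deliberate negative cases: the test calls the constructor with an out-of-range argument and asserts that it fails, which is why the *ERROR* lines appear in a passing run. A self-contained illustration of that shape using CUnit asserts follows; create_qpair_checked is a made-up stand-in, not the SPDK nvme_rdma API, and the test function would be registered through a harness like the one sketched earlier.

#include <stddef.h>
#include <CUnit/CUnit.h>

#define MIN_QUEUE_SIZE 2  /* mirrors the "Minimum queue size is 2" messages above */

/* Hypothetical stand-in for the function under test. */
static void *
create_qpair_checked(unsigned int qsize)
{
	if (qsize < MIN_QUEUE_SIZE) {
		return NULL;        /* the real code logs an *ERROR* line and fails the call */
	}
	return (void *)0x1;         /* dummy non-NULL handle for the happy path */
}

/* Negative-test pattern: bad sizes must fail, the first valid size must succeed. */
static void
test_create_qpair_size(void)
{
	CU_ASSERT_PTR_NULL(create_qpair_checked(0));
	CU_ASSERT_PTR_NULL(create_qpair_checked(1));
	CU_ASSERT_PTR_NOT_NULL(create_qpair_checked(2));
}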
00:02:39.693 passed 00:02:39.693 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:02:39.693 Test: test_nvme_rdma_ctrlr_construct ...passed 00:02:39.693 Test: test_nvme_rdma_req_put_and_get ...passed 00:02:39.693 Test: test_nvme_rdma_req_init ...passed[2024-07-10 13:26:18.812257] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:02:39.693 00:02:39.693 Test: test_nvme_rdma_validate_cm_event ...[2024-07-10 13:26:18.812292] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 620:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:02:39.693 passed 00:02:39.693 Test: test_nvme_rdma_qpair_init ...passed 00:02:39.693 Test: test_nvme_rdma_qpair_submit_request ...passed 00:02:39.693 Test: test_nvme_rdma_memory_domain ...[2024-07-10 13:26:18.812299] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 620:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:02:39.693 [2024-07-10 13:26:18.812320] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:02:39.693 passed 00:02:39.693 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:02:39.693 Test: test_rdma_get_memory_translation ...passed 00:02:39.693 Test: test_get_rdma_qpair_from_wc ...passed 00:02:39.693 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:02:39.693 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-10 13:26:18.812352] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:02:39.693 [2024-07-10 13:26:18.812359] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:02:39.693 [2024-07-10 13:26:18.812373] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:02:39.693 [2024-07-10 13:26:18.812379] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:02:39.693 passed 00:02:39.693 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-10 13:26:18.812395] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 00:02:39.693 [2024-07-10 13:26:18.812401] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:02:39.693 [2024-07-10 13:26:18.812407] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x8205311e0 on poll group 0x82b6ce000 00:02:39.693 [2024-07-10 13:26:18.812413] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 
00:02:39.693 [2024-07-10 13:26:18.812419] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0x0 00:02:39.693 [2024-07-10 13:26:18.812425] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x8205311e0 on poll group 0x82b6ce000 00:02:39.693 passed 00:02:39.693 00:02:39.693 [2024-07-10 13:26:18.812454] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:02:39.693 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.693 suites 1 1 n/a 0 0 00:02:39.693 tests 22 22 22 0 0 00:02:39.693 asserts 412 412 412 0 n/a 00:02:39.693 00:02:39.693 Elapsed time = 0.000 seconds 00:02:39.693 00:02:39.693 real 0m0.005s 00:02:39.693 user 0m0.000s 00:02:39.693 sys 0m0.008s 00:02:39.693 13:26:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:39.693 13:26:18 -- common/autotest_common.sh@10 -- # set +x 00:02:39.693 ************************************ 00:02:39.693 END TEST unittest_nvme_rdma 00:02:39.693 ************************************ 00:02:39.693 13:26:18 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:02:39.693 13:26:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:39.693 13:26:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:39.693 13:26:18 -- common/autotest_common.sh@10 -- # set +x 00:02:39.693 ************************************ 00:02:39.693 START TEST unittest_nvmf_transport 00:02:39.693 ************************************ 00:02:39.693 13:26:18 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:02:39.693 00:02:39.693 00:02:39.693 CUnit - A unit testing framework for C - Version 2.1-3 00:02:39.693 http://cunit.sourceforge.net/ 00:02:39.693 00:02:39.693 00:02:39.693 Suite: nvmf 00:02:39.693 Test: test_spdk_nvmf_transport_create ...[2024-07-10 13:26:18.856437] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:02:39.693 [2024-07-10 13:26:18.856859] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:02:39.693 [2024-07-10 13:26:18.856939] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 272:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:02:39.693 [2024-07-10 13:26:18.856994] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 255:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:02:39.693 passed 00:02:39.693 Test: test_nvmf_transport_poll_group_create ...passed 00:02:39.693 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-10 13:26:18.857062] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:02:39.693 [2024-07-10 13:26:18.857099] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:02:39.693 [2024-07-10 13:26:18.857132] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:02:39.693 passed 00:02:39.693 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:02:39.693 00:02:39.693 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.693 suites 1 1 n/a 0 0 00:02:39.693 tests 4 4 4 0 0 00:02:39.693 asserts 49 49 49 0 n/a 00:02:39.693 00:02:39.693 Elapsed time = 0.000 seconds 00:02:39.693 00:02:39.693 real 0m0.009s 00:02:39.693 user 0m0.001s 00:02:39.693 sys 0m0.008s 00:02:39.693 13:26:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:39.693 13:26:18 -- common/autotest_common.sh@10 -- # set +x 00:02:39.693 ************************************ 00:02:39.693 END TEST unittest_nvmf_transport 00:02:39.693 ************************************ 00:02:39.693 13:26:18 -- unit/unittest.sh@252 -- # run_test unittest_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:02:39.693 13:26:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:39.693 13:26:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:39.693 13:26:18 -- common/autotest_common.sh@10 -- # set +x 00:02:39.693 ************************************ 00:02:39.693 START TEST unittest_rdma 00:02:39.693 ************************************ 00:02:39.693 13:26:18 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:02:39.693 00:02:39.693 00:02:39.693 CUnit - A unit testing framework for C - Version 2.1-3 00:02:39.693 http://cunit.sourceforge.net/ 00:02:39.693 00:02:39.693 00:02:39.693 Suite: rdma_common 00:02:39.693 Test: test_spdk_rdma_pd ...[2024-07-10 13:26:18.908957] /usr/home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:02:39.693 passed 00:02:39.693 00:02:39.693 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.693 suites 1 1 n/a 0 0 00:02:39.693 tests 1 1 1 0 0 00:02:39.693 asserts 31 31 31 0 n/a 00:02:39.693 00:02:39.693 Elapsed time = 0.000 seconds 00:02:39.693 [2024-07-10 13:26:18.909355] /usr/home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:02:39.693 00:02:39.693 real 0m0.009s 00:02:39.693 user 0m0.001s 00:02:39.693 sys 0m0.008s 00:02:39.693 13:26:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:39.693 13:26:18 -- common/autotest_common.sh@10 -- # set +x 00:02:39.693 ************************************ 00:02:39.693 END TEST unittest_rdma 00:02:39.693 ************************************ 00:02:39.693 13:26:18 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:02:39.693 13:26:18 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:02:39.693 13:26:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:39.693 13:26:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:39.693 13:26:18 -- common/autotest_common.sh@10 -- # set +x 00:02:39.693 ************************************ 00:02:39.693 START TEST unittest_nvmf 00:02:39.693 ************************************ 00:02:39.693 13:26:18 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:02:39.693 13:26:18 -- unit/unittest.sh@106 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:02:39.693 00:02:39.693 00:02:39.693 CUnit - A unit testing framework for C - Version 2.1-3 00:02:39.693 http://cunit.sourceforge.net/ 00:02:39.693 00:02:39.693 00:02:39.693 Suite: nvmf 00:02:39.693 Test: test_get_log_page ...[2024-07-10 13:26:18.966800] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:02:39.693 passed 00:02:39.693 Test: test_process_fabrics_cmd ...passed 00:02:39.693 Test: test_connect ...[2024-07-10 13:26:18.967429] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:02:39.693 [2024-07-10 13:26:18.967486] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:02:39.694 [2024-07-10 13:26:18.967511] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:02:39.694 [2024-07-10 13:26:18.967534] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:02:39.694 [2024-07-10 13:26:18.967555] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:02:39.694 [2024-07-10 13:26:18.967598] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 787:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:02:39.694 [2024-07-10 13:26:18.967619] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 793:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:02:39.694 [2024-07-10 13:26:18.967640] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 
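The connect errors above exercise the fabrics Connect command field checks: RECFMT must be 0, and SQSIZE (a 0's-based queue size) must be non-zero and within the queue depth configured for this test (max 31 for the admin queue, 63 for I/O queues here; both limits come from the test setup, not the spec). The following is a hypothetical checker that captures only those rules for illustration; it is not SPDK's _nvmf_ctrlr_connect.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative limits taken from the log messages above. */
#define ADMIN_SQSIZE_MAX 31u
#define IO_SQSIZE_MAX    63u

static bool
connect_fields_ok(uint16_t recfmt, uint16_t qid, uint16_t sqsize)
{
	unsigned int max = (qid == 0) ? ADMIN_SQSIZE_MAX : IO_SQSIZE_MAX;

	if (recfmt != 0) {
		fprintf(stderr, "unsupported RECFMT %u\n", (unsigned int)recfmt);
		return false;
	}
	if (sqsize == 0 || sqsize > max) {
		fprintf(stderr, "invalid SQSIZE %u (min 1, max %u)\n",
			(unsigned int)sqsize, max);
		return false;
	}
	return true;
}

int
main(void)
{
	connect_fields_ok(0x1234, 0, 31); /* rejected: unsupported RECFMT, as in the log */
	connect_fields_ok(0, 0, 0);       /* rejected: SQSIZE 0 */
	connect_fields_ok(0, 0, 32);      /* rejected: 32 exceeds admin max 31 */
	return connect_fields_ok(0, 1, 63) ? 0 : 1; /* accepted */
}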
00:02:39.694 [2024-07-10 13:26:18.967689] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:02:39.694 [2024-07-10 13:26:18.967714] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:02:39.694 [2024-07-10 13:26:18.967765] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:02:39.694 [2024-07-10 13:26:18.967815] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 600:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:02:39.694 [2024-07-10 13:26:18.967840] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 607:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:02:39.694 [2024-07-10 13:26:18.967865] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 624:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:02:39.694 [2024-07-10 13:26:18.967911] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:02:39.694 [2024-07-10 13:26:18.967946] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group 0x0) 00:02:39.694 passed 00:02:39.694 Test: test_get_ns_id_desc_list ...passed 00:02:39.694 Test: test_identify_ns ...[2024-07-10 13:26:18.968052] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:02:39.694 [2024-07-10 13:26:18.968157] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:02:39.694 [2024-07-10 13:26:18.968217] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:02:39.694 passed 00:02:39.694 Test: test_identify_ns_iocs_specific ...[2024-07-10 13:26:18.968269] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:02:39.694 [2024-07-10 13:26:18.968390] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:02:39.694 passed 00:02:39.694 Test: test_reservation_write_exclusive ...passed 00:02:39.694 Test: test_reservation_exclusive_access ...passed 00:02:39.694 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:02:39.694 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:02:39.694 Test: test_reservation_notification_log_page ...passed 00:02:39.694 Test: test_get_dif_ctx ...passed 00:02:39.694 Test: test_set_get_features ...[2024-07-10 13:26:18.968601] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:02:39.694 [2024-07-10 13:26:18.968641] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:02:39.694 [2024-07-10 13:26:18.968660] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:02:39.694 passed 00:02:39.694 Test: test_identify_ctrlr ...[2024-07-10 13:26:18.968699] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:02:39.694 passed 
00:02:39.694 Test: test_identify_ctrlr_iocs_specific ...passed 00:02:39.694 Test: test_custom_admin_cmd ...passed 00:02:39.694 Test: test_fused_compare_and_write ...[2024-07-10 13:26:18.968919] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:02:39.694 passed 00:02:39.694 Test: test_multi_async_event_reqs ...[2024-07-10 13:26:18.968942] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:02:39.694 [2024-07-10 13:26:18.968963] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:02:39.694 passed 00:02:39.694 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:02:39.694 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:02:39.694 Test: test_multi_async_events ...passed 00:02:39.694 Test: test_rae ...passed 00:02:39.694 Test: test_nvmf_ctrlr_create_destruct ...passed 00:02:39.694 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:02:39.694 Test: test_spdk_nvmf_request_zcopy_start ...[2024-07-10 13:26:18.969139] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:02:39.694 passed 00:02:39.694 Test: test_zcopy_read ...passed 00:02:39.694 Test: test_zcopy_write ...passed 00:02:39.694 Test: test_nvmf_property_set ...passed 00:02:39.694 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-10 13:26:18.969220] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:02:39.694 [2024-07-10 13:26:18.969242] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:02:39.694 passed 00:02:39.694 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-10 13:26:18.969282] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:02:39.694 [2024-07-10 13:26:18.969301] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:02:39.694 passed 00:02:39.694 00:02:39.694 [2024-07-10 13:26:18.969347] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:02:39.694 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.694 suites 1 1 n/a 0 0 00:02:39.694 tests 30 30 30 0 0 00:02:39.694 asserts 885 885 885 0 n/a 00:02:39.694 00:02:39.694 Elapsed time = 0.008 seconds 00:02:39.694 13:26:18 -- unit/unittest.sh@107 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:02:39.694 00:02:39.694 00:02:39.694 CUnit - A unit testing framework for C - Version 2.1-3 00:02:39.694 http://cunit.sourceforge.net/ 00:02:39.694 00:02:39.694 00:02:39.694 Suite: nvmf 00:02:39.694 Test: test_get_rw_params ...passed 00:02:39.694 Test: test_lba_in_range ...passed 00:02:39.694 Test: test_get_dif_ctx ...passed 00:02:39.694 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:02:39.694 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-10 13:26:18.980624] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 
435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:02:39.694 [2024-07-10 13:26:18.980994] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:02:39.694 passed 00:02:39.694 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-10 13:26:18.981040] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 451:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:02:39.694 [2024-07-10 13:26:18.981070] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:02:39.694 [2024-07-10 13:26:18.981102] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 954:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:02:39.694 passed 00:02:39.694 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-10 13:26:18.981140] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:02:39.694 [2024-07-10 13:26:18.981162] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 397:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:02:39.694 [2024-07-10 13:26:18.981185] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:02:39.694 [2024-07-10 13:26:18.981205] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:02:39.694 passed 00:02:39.694 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:02:39.694 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:02:39.694 00:02:39.694 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.694 suites 1 1 n/a 0 0 00:02:39.694 tests 9 9 9 0 0 00:02:39.694 asserts 157 157 157 0 n/a 00:02:39.694 00:02:39.694 Elapsed time = 0.008 seconds 00:02:39.694 13:26:18 -- unit/unittest.sh@108 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:02:39.694 00:02:39.694 00:02:39.694 CUnit - A unit testing framework for C - Version 2.1-3 00:02:39.694 http://cunit.sourceforge.net/ 00:02:39.694 00:02:39.694 00:02:39.694 Suite: nvmf 00:02:39.694 Test: test_discovery_log ...passed 00:02:39.694 Test: test_discovery_log_with_filters ...passed 00:02:39.694 00:02:39.694 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.694 suites 1 1 n/a 0 0 00:02:39.694 tests 2 2 2 0 0 00:02:39.694 asserts 238 238 238 0 n/a 00:02:39.694 00:02:39.694 Elapsed time = 0.000 seconds 00:02:39.694 13:26:18 -- unit/unittest.sh@109 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:02:39.694 00:02:39.694 00:02:39.694 CUnit - A unit testing framework for C - Version 2.1-3 00:02:39.694 http://cunit.sourceforge.net/ 00:02:39.694 00:02:39.694 00:02:39.694 Suite: nvmf 00:02:39.694 Test: nvmf_test_create_subsystem ...[2024-07-10 13:26:18.999515] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 126:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:02:39.694 [2024-07-10 13:26:18.999899] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 
00:02:39.694 [2024-07-10 13:26:18.999945] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:02:39.694 [2024-07-10 13:26:18.999969] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:02:39.694 [2024-07-10 13:26:19.000000] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 184:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:02:39.694 [2024-07-10 13:26:19.000030] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:02:39.694 [2024-07-10 13:26:19.000064] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:02:39.694 [2024-07-10 13:26:19.000130] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:02:39.695 [2024-07-10 13:26:19.000169] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:02:39.695 [2024-07-10 13:26:19.000192] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:02:39.695 [2024-07-10 13:26:19.000216] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:02:39.695 passed 00:02:39.695 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-10 13:26:19.000322] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:02:39.695 [2024-07-10 13:26:19.000360] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:02:39.695 passed 00:02:39.695 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:02:39.695 Test: test_reservation_register ...[2024-07-10 13:26:19.000438] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2825:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:02:39.695 [2024-07-10 13:26:19.000486] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:02:39.695 passed 00:02:39.695 Test: test_reservation_register_with_ptpl ...passed 00:02:39.695 Test: test_reservation_acquire_preempt_1 ...passed 00:02:39.695 Test: test_reservation_acquire_release_with_ptpl ...[2024-07-10 13:26:19.000848] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2825:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:02:39.695 passed 
00:02:39.695 Test: test_reservation_release ...[2024-07-10 13:26:19.001154] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2825:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:02:39.695 passed 00:02:39.695 Test: test_reservation_unregister_notification ...[2024-07-10 13:26:19.001199] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2825:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:02:39.695 passed 00:02:39.695 Test: test_reservation_release_notification ...[2024-07-10 13:26:19.001236] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2825:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:02:39.695 passed 00:02:39.695 Test: test_reservation_release_notification_write_exclusive ...[2024-07-10 13:26:19.001270] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2825:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:02:39.695 passed 00:02:39.695 Test: test_reservation_clear_notification ...[2024-07-10 13:26:19.001303] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2825:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:02:39.695 passed 00:02:39.695 Test: test_reservation_preempt_notification ...[2024-07-10 13:26:19.001335] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2825:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:02:39.695 passed 00:02:39.695 Test: test_spdk_nvmf_ns_event ...passed 00:02:39.695 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:02:39.695 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:02:39.695 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-10 13:26:19.001530] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 261:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:02:39.695 [2024-07-10 13:26:19.001569] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:02:39.695 passed 00:02:39.695 Test: test_nvmf_ns_reservation_report ...[2024-07-10 13:26:19.001604] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3187:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:02:39.695 passed 00:02:39.695 Test: test_nvmf_nqn_is_valid ...[2024-07-10 13:26:19.001656] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:02:39.695 [2024-07-10 13:26:19.001710] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:fdd48f7b-3ebf-11ef-b9c4-5b09e08d479": uuid is not the correct length 00:02:39.695 [2024-07-10 13:26:19.001742] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 
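The NQN failures above encode the naming rules the target enforces: total length between 11 and 223 bytes (the log shows "length 4 < min 11" and "length 224 > max 223"), an "nqn." prefix followed by a date-authority domain, and a user-supplied part introduced by ':' — plus UUID-form, UTF-8 and per-label checks not reproduced here. Below is a simplified, illustrative checker for just the length/prefix/user-part subset; the real validation is nvmf_nqn_is_valid in subsystem.c and covers considerably more.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Bounds taken from the messages in the log above. */
#define NQN_MIN_LEN 11
#define NQN_MAX_LEN 223

static bool
nqn_looks_valid(const char *nqn)
{
	size_t len = strlen(nqn);
	const char *colon;

	if (len < NQN_MIN_LEN || len > NQN_MAX_LEN) {
		return false;
	}
	if (strncmp(nqn, "nqn.", 4) != 0) {
		return false;
	}
	/* "nqn.yyyy-mm.<reversed-domain>:<user part>", e.g. nqn.2016-06.io.spdk:cnode1;
	 * the user part after ':' must be non-empty. */
	colon = strchr(nqn + 4, ':');
	return colon != NULL && colon[1] != '\0';
}

int
main(void)
{
	printf("%d\n", nqn_looks_valid("nqn.2016-06.io.spdk:cnode1")); /* 1: well formed */
	printf("%d\n", nqn_looks_valid("nqn."));                       /* 0: length 4 < min 11 */
	printf("%d\n", nqn_looks_valid("nqn.2016-06.io.spdk:"));       /* 0: empty user part */
	return 0;
}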
00:02:39.695 passed 00:02:39.695 Test: test_nvmf_ns_reservation_restore ...[2024-07-10 13:26:19.001804] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:02:39.695 passed 00:02:39.695 Test: test_nvmf_subsystem_state_change ...passed 00:02:39.695 Test: test_nvmf_reservation_custom_ops ...passed 00:02:39.695 00:02:39.695 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.695 suites 1 1 n/a 0 0 00:02:39.695 tests 22 22 22 0 0 00:02:39.695 asserts 407 407 407 0 n/a 00:02:39.695 00:02:39.695 Elapsed time = 0.008 seconds 00:02:39.695 13:26:19 -- unit/unittest.sh@110 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:02:39.695 00:02:39.695 00:02:39.695 CUnit - A unit testing framework for C - Version 2.1-3 00:02:39.695 http://cunit.sourceforge.net/ 00:02:39.695 00:02:39.695 00:02:39.695 Suite: nvmf 00:02:39.695 Test: test_nvmf_tcp_create ...[2024-07-10 13:26:19.018176] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:02:39.695 passed 00:02:39.695 Test: test_nvmf_tcp_destroy ...passed 00:02:39.695 Test: test_nvmf_tcp_poll_group_create ...passed 00:02:39.695 Test: test_nvmf_tcp_send_c2h_data ...passed 00:02:39.695 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:02:39.695 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:02:39.695 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:02:39.695 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-10 13:26:19.032923] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:02:39.695 passed 00:02:39.695 Test: test_nvmf_tcp_send_capsule_resp_pdu ...[2024-07-10 13:26:19.032971] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b477a0 is same with the state(5) to be set 00:02:39.695 [2024-07-10 13:26:19.032987] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b477a0 is same with the state(5) to be set 00:02:39.695 [2024-07-10 13:26:19.033001] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:02:39.695 [2024-07-10 13:26:19.033014] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b477a0 is same with the state(5) to be set 00:02:39.695 passed 00:02:39.695 Test: test_nvmf_tcp_icreq_handle ...[2024-07-10 13:26:19.033058] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:02:39.695 [2024-07-10 13:26:19.033085] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:02:39.695 [2024-07-10 13:26:19.033098] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b476b8 is same with the state(5) to be set 00:02:39.695 [2024-07-10 13:26:19.033110] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:02:39.695 [2024-07-10 13:26:19.033123] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b476b8 is same with the state(5) to be set 00:02:39.695 
[2024-07-10 13:26:19.033136] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:02:39.695 [2024-07-10 13:26:19.033148] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b476b8 is same with the state(5) to be set 00:02:39.695 [2024-07-10 13:26:19.033163] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=0 00:02:39.695 passed 00:02:39.695 Test: test_nvmf_tcp_check_xfer_type ...passed 00:02:39.695 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-10 13:26:19.033175] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b476b8 is same with the state(5) to be set 00:02:39.695 [2024-07-10 13:26:19.033199] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2485:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:02:39.695 [2024-07-10 13:26:19.033212] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:02:39.695 [2024-07-10 13:26:19.033224] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b476b8 is same with the state(5) to be set 00:02:39.695 passed 00:02:39.695 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-10 13:26:19.033252] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x820b46f30 00:02:39.695 [2024-07-10 13:26:19.033267] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:02:39.695 [2024-07-10 13:26:19.033281] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b477a0 is same with the state(5) to be set 00:02:39.695 [2024-07-10 13:26:19.033307] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2275:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x820b477a0 00:02:39.695 [2024-07-10 13:26:19.033321] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:02:39.695 [2024-07-10 13:26:19.033335] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b477a0 is same with the state(5) to be set 00:02:39.695 [2024-07-10 13:26:19.033347] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:02:39.695 [2024-07-10 13:26:19.033371] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:02:39.695 [2024-07-10 13:26:19.033383] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b477a0 is same with the state(5) to be set 00:02:39.695 [2024-07-10 13:26:19.033396] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:02:39.695 [2024-07-10 13:26:19.033409] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:02:39.695 [2024-07-10 13:26:19.033417] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b477a0 is same with the state(5) to be set 00:02:39.695 [2024-07-10 13:26:19.033442] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:02:39.695 [2024-07-10 13:26:19.033450] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b477a0 is same with the state(5) to be set 00:02:39.695 [2024-07-10 13:26:19.033458] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:02:39.695 [2024-07-10 13:26:19.033465] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b477a0 is same with the state(5) to be set 00:02:39.695 [2024-07-10 13:26:19.033473] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:02:39.695 [2024-07-10 13:26:19.033481] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b477a0 is same with the state(5) to be set 00:02:39.695 [2024-07-10 13:26:19.033489] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:02:39.695 [2024-07-10 13:26:19.033496] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b477a0 is same with the state(5) to be set 00:02:39.696 [2024-07-10 13:26:19.033504] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:02:39.696 passed 00:02:39.696 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-10 13:26:19.033511] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b477a0 is same with the state(5) to be set 00:02:39.696 [2024-07-10 13:26:19.033519] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:02:39.696 [2024-07-10 13:26:19.033526] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b477a0 is same with the state(5) to be set 00:02:39.696 passed 00:02:39.696 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-10 13:26:19.041714] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:02:39.696 passed 00:02:39.696 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-10 13:26:19.041789] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:02:39.696 [2024-07-10 13:26:19.042231] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:02:39.696 passed 00:02:39.696 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-10 13:26:19.042292] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 
00:02:39.696 [2024-07-10 13:26:19.042580] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:02:39.696 [2024-07-10 13:26:19.042636] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:02:39.696 passed 00:02:39.696 00:02:39.696 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.696 suites 1 1 n/a 0 0 00:02:39.696 tests 17 17 17 0 0 00:02:39.696 asserts 222 222 222 0 n/a 00:02:39.696 00:02:39.696 Elapsed time = 0.031 seconds 00:02:39.696 13:26:19 -- unit/unittest.sh@111 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:02:39.984 00:02:39.984 00:02:39.984 CUnit - A unit testing framework for C - Version 2.1-3 00:02:39.984 http://cunit.sourceforge.net/ 00:02:39.984 00:02:39.984 00:02:39.984 Suite: nvmf 00:02:39.984 Test: test_nvmf_tgt_create_poll_group ...passed 00:02:39.984 00:02:39.984 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.984 suites 1 1 n/a 0 0 00:02:39.984 tests 1 1 1 0 0 00:02:39.984 asserts 17 17 17 0 n/a 00:02:39.984 00:02:39.984 Elapsed time = 0.000 seconds 00:02:39.984 00:02:39.984 real 0m0.105s 00:02:39.984 user 0m0.050s 00:02:39.984 sys 0m0.058s 00:02:39.984 13:26:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:39.984 ************************************ 00:02:39.984 END TEST unittest_nvmf 00:02:39.984 ************************************ 00:02:39.984 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:39.984 13:26:19 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:02:39.984 13:26:19 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:02:39.984 13:26:19 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:02:39.984 13:26:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:39.984 13:26:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:39.984 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:39.984 ************************************ 00:02:39.984 START TEST unittest_nvmf_rdma 00:02:39.984 ************************************ 00:02:39.984 13:26:19 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:02:39.984 00:02:39.984 00:02:39.984 CUnit - A unit testing framework for C - Version 2.1-3 00:02:39.984 http://cunit.sourceforge.net/ 00:02:39.984 00:02:39.984 00:02:39.984 Suite: nvmf 00:02:39.984 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-10 13:26:19.119679] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1915:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:02:39.984 [2024-07-10 13:26:19.120197] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1965:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:02:39.984 [2024-07-10 13:26:19.120270] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1965:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:02:39.984 passed 00:02:39.984 Test: test_spdk_nvmf_rdma_request_process ...passed 00:02:39.984 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:02:39.984 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:02:39.984 Test: 
test_nvmf_rdma_opts_init ...passed 00:02:39.984 Test: test_nvmf_rdma_request_free_data ...passed 00:02:39.984 Test: test_nvmf_rdma_update_ibv_state ...[2024-07-10 13:26:19.120722] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:02:39.984 [2024-07-10 13:26:19.120770] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:02:39.984 passed 00:02:39.984 Test: test_nvmf_rdma_resources_create ...passed 00:02:39.984 Test: test_nvmf_rdma_qpair_compare ...passed 00:02:39.984 Test: test_nvmf_rdma_resize_cq ...[2024-07-10 13:26:19.122179] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1007:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:02:39.984 Using CQ of insufficient size may lead to CQ overrun 00:02:39.984 [2024-07-10 13:26:19.122239] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1012:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:02:39.985 [2024-07-10 13:26:19.122365] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:02:39.985 passed 00:02:39.985 00:02:39.985 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.985 suites 1 1 n/a 0 0 00:02:39.985 tests 10 10 10 0 0 00:02:39.985 asserts 584 584 584 0 n/a 00:02:39.985 00:02:39.985 Elapsed time = 0.008 seconds 00:02:39.985 00:02:39.985 real 0m0.013s 00:02:39.985 user 0m0.000s 00:02:39.985 sys 0m0.016s 00:02:39.985 13:26:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:39.985 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:39.985 ************************************ 00:02:39.985 END TEST unittest_nvmf_rdma 00:02:39.985 ************************************ 00:02:39.985 13:26:19 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:02:39.985 13:26:19 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:02:39.985 13:26:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:39.985 13:26:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:39.985 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:39.985 ************************************ 00:02:39.985 START TEST unittest_scsi 00:02:39.985 ************************************ 00:02:39.985 13:26:19 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:02:39.985 13:26:19 -- unit/unittest.sh@115 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:02:39.985 00:02:39.985 00:02:39.985 CUnit - A unit testing framework for C - Version 2.1-3 00:02:39.985 http://cunit.sourceforge.net/ 00:02:39.985 00:02:39.985 00:02:39.985 Suite: dev_suite 00:02:39.985 Test: dev_destruct_null_dev ...passed 00:02:39.985 Test: dev_destruct_zero_luns ...passed 00:02:39.985 Test: dev_destruct_null_lun ...passed 00:02:39.985 Test: dev_destruct_success ...passed 00:02:39.985 Test: dev_construct_num_luns_zero ...[2024-07-10 13:26:19.182853] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:02:39.985 passed 00:02:39.985 Test: dev_construct_no_lun_zero ...[2024-07-10 13:26:19.183300] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:02:39.985 passed 00:02:39.985 
Test: dev_construct_null_lun ...[2024-07-10 13:26:19.183335] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 248:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:02:39.985 passed 00:02:39.985 Test: dev_construct_name_too_long ...passed 00:02:39.985 Test: dev_construct_success ...[2024-07-10 13:26:19.183363] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 223:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:02:39.985 passed 00:02:39.985 Test: dev_construct_success_lun_zero_not_first ...passed 00:02:39.985 Test: dev_queue_mgmt_task_success ...passed 00:02:39.985 Test: dev_queue_task_success ...passed 00:02:39.985 Test: dev_stop_success ...passed 00:02:39.985 Test: dev_add_port_max_ports ...passed 00:02:39.985 Test: dev_add_port_construct_failure1 ...[2024-07-10 13:26:19.183441] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:02:39.985 passed 00:02:39.985 Test: dev_add_port_construct_failure2 ...[2024-07-10 13:26:19.183468] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:02:39.985 [2024-07-10 13:26:19.183492] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:02:39.985 passed 00:02:39.985 Test: dev_add_port_success1 ...passed 00:02:39.985 Test: dev_add_port_success2 ...passed 00:02:39.985 Test: dev_add_port_success3 ...passed 00:02:39.985 Test: dev_find_port_by_id_num_ports_zero ...passed 00:02:39.985 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:02:39.985 Test: dev_find_port_by_id_success ...passed 00:02:39.985 Test: dev_add_lun_bdev_not_found ...passed 00:02:39.985 Test: dev_add_lun_no_free_lun_id ...passed 00:02:39.985 Test: dev_add_lun_success1 ...passed 00:02:39.985 Test: dev_add_lun_success2 ...passed 00:02:39.985 Test: dev_check_pending_tasks ...passed 00:02:39.985 Test: dev_iterate_luns ...[2024-07-10 13:26:19.183844] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:02:39.985 passed 00:02:39.985 Test: dev_find_free_lun ...passed 00:02:39.985 00:02:39.985 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.985 suites 1 1 n/a 0 0 00:02:39.985 tests 29 29 29 0 0 00:02:39.985 asserts 97 97 97 0 n/a 00:02:39.985 00:02:39.985 Elapsed time = 0.000 seconds 00:02:39.985 13:26:19 -- unit/unittest.sh@116 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:02:39.985 00:02:39.985 00:02:39.985 CUnit - A unit testing framework for C - Version 2.1-3 00:02:39.985 http://cunit.sourceforge.net/ 00:02:39.985 00:02:39.985 00:02:39.985 Suite: lun_suite 00:02:39.985 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-10 13:26:19.195042] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:02:39.985 passed 00:02:39.985 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:02:39.985 Test: lun_task_mgmt_execute_lun_reset ...passed 00:02:39.985 Test: lun_task_mgmt_execute_target_reset ...[2024-07-10 13:26:19.195486] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: 
abort task set not supported 00:02:39.985 passed 00:02:39.985 Test: lun_task_mgmt_execute_invalid_case ...[2024-07-10 13:26:19.195534] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:02:39.985 passed 00:02:39.985 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:02:39.985 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:02:39.985 Test: lun_append_task_null_lun_not_supported ...passed 00:02:39.985 Test: lun_execute_scsi_task_pending ...passed 00:02:39.985 Test: lun_execute_scsi_task_complete ...passed 00:02:39.985 Test: lun_execute_scsi_task_resize ...passed 00:02:39.985 Test: lun_destruct_success ...passed 00:02:39.985 Test: lun_construct_null_ctx ...passed 00:02:39.985 Test: lun_construct_success ...passed 00:02:39.985 Test: lun_reset_task_wait_scsi_task_complete ...[2024-07-10 13:26:19.195607] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:02:39.985 passed 00:02:39.985 Test: lun_reset_task_suspend_scsi_task ...passed 00:02:39.985 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:02:39.985 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:02:39.985 00:02:39.985 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.985 suites 1 1 n/a 0 0 00:02:39.985 tests 18 18 18 0 0 00:02:39.985 asserts 153 153 153 0 n/a 00:02:39.985 00:02:39.985 Elapsed time = 0.000 seconds 00:02:39.985 13:26:19 -- unit/unittest.sh@117 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:02:39.985 00:02:39.985 00:02:39.985 CUnit - A unit testing framework for C - Version 2.1-3 00:02:39.985 http://cunit.sourceforge.net/ 00:02:39.985 00:02:39.985 00:02:39.985 Suite: scsi_suite 00:02:39.985 Test: scsi_init ...passed 00:02:39.985 00:02:39.985 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.985 suites 1 1 n/a 0 0 00:02:39.985 tests 1 1 1 0 0 00:02:39.985 asserts 1 1 1 0 n/a 00:02:39.985 00:02:39.985 Elapsed time = 0.000 seconds 00:02:39.985 13:26:19 -- unit/unittest.sh@118 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:02:39.985 00:02:39.985 00:02:39.985 CUnit - A unit testing framework for C - Version 2.1-3 00:02:39.985 http://cunit.sourceforge.net/ 00:02:39.985 00:02:39.985 00:02:39.985 Suite: translation_suite 00:02:39.985 Test: mode_select_6_test ...passed 00:02:39.985 Test: mode_select_6_test2 ...passed 00:02:39.985 Test: mode_sense_6_test ...passed 00:02:39.985 Test: mode_sense_10_test ...passed 00:02:39.985 Test: inquiry_evpd_test ...passed 00:02:39.985 Test: inquiry_standard_test ...passed 00:02:39.985 Test: inquiry_overflow_test ...passed 00:02:39.985 Test: task_complete_test ...passed 00:02:39.985 Test: lba_range_test ...passed 00:02:39.985 Test: xfer_len_test ...[2024-07-10 13:26:19.210235] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1271:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:02:39.985 passed 00:02:39.985 Test: xfer_test ...passed 00:02:39.985 Test: scsi_name_padding_test ...passed 00:02:39.985 Test: get_dif_ctx_test ...passed 00:02:39.985 Test: unmap_split_test ...passed 00:02:39.985 00:02:39.985 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.985 suites 1 1 n/a 0 0 00:02:39.985 tests 14 14 14 0 0 00:02:39.985 asserts 1200 1200 1200 0 n/a 00:02:39.985 00:02:39.985 Elapsed time = 0.000 seconds 00:02:39.985 13:26:19 -- unit/unittest.sh@119 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:02:39.985 00:02:39.985 00:02:39.985 CUnit - A unit testing framework for C - Version 2.1-3 00:02:39.985 http://cunit.sourceforge.net/ 00:02:39.985 00:02:39.985 00:02:39.985 Suite: reservation_suite 00:02:39.985 Test: test_reservation_register ...passed 00:02:39.985 Test: test_reservation_reserve ...[2024-07-10 13:26:19.214208] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:02:39.985 [2024-07-10 13:26:19.214351] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:02:39.985 [2024-07-10 13:26:19.214362] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:02:39.985 passed 00:02:39.985 Test: test_reservation_preempt_non_all_regs ...[2024-07-10 13:26:19.214373] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:02:39.985 [2024-07-10 13:26:19.214384] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:02:39.985 [2024-07-10 13:26:19.214390] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:02:39.985 passed 00:02:39.985 Test: test_reservation_preempt_all_regs ...passed 00:02:39.985 Test: test_reservation_cmds_conflict ...[2024-07-10 13:26:19.214408] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:02:39.985 [2024-07-10 13:26:19.214419] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:02:39.986 passed 00:02:39.986 Test: test_scsi2_reserve_release ...passed 00:02:39.986 Test: test_pr_with_scsi2_reserve_release ...[2024-07-10 13:26:19.214435] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:02:39.986 [2024-07-10 13:26:19.214442] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:02:39.986 [2024-07-10 13:26:19.214447] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:02:39.986 [2024-07-10 13:26:19.214453] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:02:39.986 [2024-07-10 13:26:19.214459] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:02:39.986 passed 00:02:39.986 00:02:39.986 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.986 suites 1 1 n/a 0 0 00:02:39.986 tests 7 7 7 0 0 00:02:39.986 asserts 257 257 257 0 n/a 00:02:39.986 00:02:39.986 Elapsed time = 0.000 seconds 00:02:39.986 [2024-07-10 13:26:19.214471] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:02:39.986 00:02:39.986 real 0m0.040s 00:02:39.986 user 0m0.012s 00:02:39.986 sys 0m0.029s 00:02:39.986 13:26:19 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:02:39.986 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:39.986 ************************************ 00:02:39.986 END TEST unittest_scsi 00:02:39.986 ************************************ 00:02:39.986 13:26:19 -- unit/unittest.sh@276 -- # uname -s 00:02:39.986 13:26:19 -- unit/unittest.sh@276 -- # '[' FreeBSD = Linux ']' 00:02:39.986 13:26:19 -- unit/unittest.sh@279 -- # run_test unittest_thread /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:02:39.986 13:26:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:39.986 13:26:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:39.986 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:39.986 ************************************ 00:02:39.986 START TEST unittest_thread 00:02:39.986 ************************************ 00:02:39.986 13:26:19 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:02:39.986 00:02:39.986 00:02:39.986 CUnit - A unit testing framework for C - Version 2.1-3 00:02:39.986 http://cunit.sourceforge.net/ 00:02:39.986 00:02:39.986 00:02:39.986 Suite: io_channel 00:02:39.986 Test: thread_alloc ...passed 00:02:39.986 Test: thread_send_msg ...passed 00:02:39.986 Test: thread_poller ...passed 00:02:39.986 Test: poller_pause ...passed 00:02:39.986 Test: thread_for_each ...passed 00:02:39.986 Test: for_each_channel_remove ...passed 00:02:39.986 Test: for_each_channel_unreg ...[2024-07-10 13:26:19.278411] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2164:spdk_io_device_register: *ERROR*: io_device 0x820fc4cc4 already registered (old:0x82c1a8000 new:0x82c1a8180) 00:02:39.986 passed 00:02:39.986 Test: thread_name ...passed 00:02:39.986 Test: channel ...[2024-07-10 13:26:19.279473] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x226918 00:02:39.986 passed 00:02:39.986 Test: channel_destroy_races ...passed 00:02:39.986 Test: thread_exit_test ...[2024-07-10 13:26:19.280616] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 630:thread_exit: *ERROR*: thread 0x82c16da80 got timeout, and move it to the exited state forcefully 00:02:39.986 passed 00:02:39.986 Test: thread_update_stats_test ...passed 00:02:39.986 Test: nested_channel ...passed 00:02:39.986 Test: device_unregister_and_thread_exit_race ...passed 00:02:39.986 Test: cache_closest_timed_poller ...passed 00:02:39.986 Test: multi_timed_pollers_have_same_expiration ...passed 00:02:39.986 Test: io_device_lookup ...passed 00:02:39.986 Test: spdk_spin ...[2024-07-10 13:26:19.283246] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:02:39.986 [2024-07-10 13:26:19.283296] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x820fc4cc0 00:02:39.986 [2024-07-10 13:26:19.283333] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:02:39.986 [2024-07-10 13:26:19.283685] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:02:39.986 [2024-07-10 13:26:19.283731] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x820fc4cc0 
00:02:39.986 [2024-07-10 13:26:19.283762] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:02:39.986 [2024-07-10 13:26:19.283793] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x820fc4cc0 00:02:39.986 [2024-07-10 13:26:19.283824] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:02:39.986 [2024-07-10 13:26:19.283852] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x820fc4cc0 00:02:39.986 [2024-07-10 13:26:19.283884] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:02:39.986 [2024-07-10 13:26:19.283913] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x820fc4cc0 00:02:39.986 passed 00:02:39.986 Test: for_each_channel_and_thread_exit_race ...passed 00:02:39.986 Test: for_each_thread_and_thread_exit_race ...passed 00:02:39.986 00:02:39.986 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.986 suites 1 1 n/a 0 0 00:02:39.986 tests 20 20 20 0 0 00:02:39.986 asserts 409 409 409 0 n/a 00:02:39.986 00:02:39.986 Elapsed time = 0.016 seconds 00:02:39.986 00:02:39.986 real 0m0.021s 00:02:39.986 user 0m0.018s 00:02:39.986 sys 0m0.007s 00:02:39.986 13:26:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:39.986 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:39.986 ************************************ 00:02:39.986 END TEST unittest_thread 00:02:39.986 ************************************ 00:02:39.986 13:26:19 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:02:39.986 13:26:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:39.986 13:26:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:39.986 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:39.986 ************************************ 00:02:39.986 START TEST unittest_iobuf 00:02:39.986 ************************************ 00:02:39.986 13:26:19 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:02:39.986 00:02:39.986 00:02:39.986 CUnit - A unit testing framework for C - Version 2.1-3 00:02:39.986 http://cunit.sourceforge.net/ 00:02:39.986 00:02:39.986 00:02:39.986 Suite: io_channel 00:02:39.986 Test: iobuf ...passed 00:02:39.986 Test: iobuf_cache ...[2024-07-10 13:26:19.343246] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 304:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:02:39.986 [2024-07-10 13:26:19.343681] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 306:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:02:39.986 [2024-07-10 13:26:19.343772] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 316:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. 
You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:02:39.986 [2024-07-10 13:26:19.343811] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 318:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:02:39.986 [2024-07-10 13:26:19.343860] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 304:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:02:39.986 [2024-07-10 13:26:19.343894] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 306:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:02:39.986 passed 00:02:39.986 00:02:39.986 Run Summary: Type Total Ran Passed Failed Inactive 00:02:39.986 suites 1 1 n/a 0 0 00:02:39.986 tests 2 2 2 0 0 00:02:39.986 asserts 107 107 107 0 n/a 00:02:39.986 00:02:39.986 Elapsed time = 0.008 seconds 00:02:39.986 00:02:39.986 real 0m0.011s 00:02:39.986 user 0m0.009s 00:02:39.986 sys 0m0.008s 00:02:39.986 13:26:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:39.986 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:40.248 ************************************ 00:02:40.248 END TEST unittest_iobuf 00:02:40.248 ************************************ 00:02:40.248 13:26:19 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:02:40.248 13:26:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:40.248 13:26:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:40.248 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:40.248 ************************************ 00:02:40.248 START TEST unittest_util 00:02:40.248 ************************************ 00:02:40.248 13:26:19 -- common/autotest_common.sh@1104 -- # unittest_util 00:02:40.248 13:26:19 -- unit/unittest.sh@132 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:02:40.248 00:02:40.248 00:02:40.248 CUnit - A unit testing framework for C - Version 2.1-3 00:02:40.248 http://cunit.sourceforge.net/ 00:02:40.248 00:02:40.248 00:02:40.248 Suite: base64 00:02:40.248 Test: test_base64_get_encoded_strlen ...passed 00:02:40.248 Test: test_base64_get_decoded_len ...passed 00:02:40.248 Test: test_base64_encode ...passed 00:02:40.248 Test: test_base64_decode ...passed 00:02:40.248 Test: test_base64_urlsafe_encode ...passed 00:02:40.248 Test: test_base64_urlsafe_decode ...passed 00:02:40.248 00:02:40.248 Run Summary: Type Total Ran Passed Failed Inactive 00:02:40.248 suites 1 1 n/a 0 0 00:02:40.248 tests 6 6 6 0 0 00:02:40.248 asserts 112 112 112 0 n/a 00:02:40.248 00:02:40.248 Elapsed time = 0.000 seconds 00:02:40.248 13:26:19 -- unit/unittest.sh@133 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:02:40.248 00:02:40.248 00:02:40.248 CUnit - A unit testing framework for C - Version 2.1-3 00:02:40.248 http://cunit.sourceforge.net/ 00:02:40.248 00:02:40.248 00:02:40.248 Suite: bit_array 00:02:40.248 Test: test_1bit ...passed 00:02:40.248 Test: test_64bit ...passed 00:02:40.248 Test: test_find ...passed 00:02:40.248 Test: test_resize ...passed 00:02:40.248 Test: test_errors ...passed 00:02:40.248 Test: test_count ...passed 00:02:40.248 Test: test_mask_store_load ...passed 00:02:40.248 Test: test_mask_clear ...passed 00:02:40.248 00:02:40.248 Run Summary: Type Total Ran Passed Failed Inactive 00:02:40.248 suites 1 1 n/a 0 0 00:02:40.248 tests 8 8 8 0 0 00:02:40.248 asserts 5075 5075 5075 0 
n/a 00:02:40.248 00:02:40.248 Elapsed time = 0.000 seconds 00:02:40.248 13:26:19 -- unit/unittest.sh@134 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:02:40.248 00:02:40.248 00:02:40.248 CUnit - A unit testing framework for C - Version 2.1-3 00:02:40.248 http://cunit.sourceforge.net/ 00:02:40.248 00:02:40.248 00:02:40.248 Suite: cpuset 00:02:40.248 Test: test_cpuset ...passed 00:02:40.248 Test: test_cpuset_parse ...[2024-07-10 13:26:19.419074] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:02:40.248 [2024-07-10 13:26:19.419455] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:02:40.248 [2024-07-10 13:26:19.419499] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:02:40.248 [2024-07-10 13:26:19.419523] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:02:40.248 [2024-07-10 13:26:19.419544] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:02:40.248 [2024-07-10 13:26:19.419564] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:02:40.248 [2024-07-10 13:26:19.419585] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:02:40.248 [2024-07-10 13:26:19.419605] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:02:40.248 passed 00:02:40.248 Test: test_cpuset_fmt ...passed 00:02:40.248 00:02:40.248 Run Summary: Type Total Ran Passed Failed Inactive 00:02:40.248 suites 1 1 n/a 0 0 00:02:40.248 tests 3 3 3 0 0 00:02:40.248 asserts 65 65 65 0 n/a 00:02:40.248 00:02:40.248 Elapsed time = 0.000 seconds 00:02:40.248 13:26:19 -- unit/unittest.sh@135 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:02:40.248 00:02:40.248 00:02:40.248 CUnit - A unit testing framework for C - Version 2.1-3 00:02:40.248 http://cunit.sourceforge.net/ 00:02:40.248 00:02:40.248 00:02:40.248 Suite: crc16 00:02:40.248 Test: test_crc16_t10dif ...passed 00:02:40.248 Test: test_crc16_t10dif_seed ...passed 00:02:40.248 Test: test_crc16_t10dif_copy ...passed 00:02:40.248 00:02:40.248 Run Summary: Type Total Ran Passed Failed Inactive 00:02:40.248 suites 1 1 n/a 0 0 00:02:40.248 tests 3 3 3 0 0 00:02:40.248 asserts 5 5 5 0 n/a 00:02:40.248 00:02:40.248 Elapsed time = 0.000 seconds 00:02:40.248 13:26:19 -- unit/unittest.sh@136 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:02:40.248 00:02:40.248 00:02:40.248 CUnit - A unit testing framework for C - Version 2.1-3 00:02:40.248 http://cunit.sourceforge.net/ 00:02:40.248 00:02:40.248 00:02:40.248 Suite: crc32_ieee 00:02:40.248 Test: test_crc32_ieee ...passed 00:02:40.248 00:02:40.248 Run Summary: Type Total Ran Passed Failed Inactive 00:02:40.248 suites 1 1 n/a 0 0 00:02:40.248 tests 1 1 1 0 0 00:02:40.248 asserts 1 1 1 0 n/a 00:02:40.248 00:02:40.248 Elapsed time = 0.000 seconds 00:02:40.248 13:26:19 -- unit/unittest.sh@137 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:02:40.248 00:02:40.248 00:02:40.248 CUnit - A unit testing framework for C - 
Version 2.1-3 00:02:40.248 http://cunit.sourceforge.net/ 00:02:40.248 00:02:40.248 00:02:40.248 Suite: crc32c 00:02:40.248 Test: test_crc32c ...passed 00:02:40.248 Test: test_crc32c_nvme ...passed 00:02:40.248 00:02:40.248 Run Summary: Type Total Ran Passed Failed Inactive 00:02:40.248 suites 1 1 n/a 0 0 00:02:40.248 tests 2 2 2 0 0 00:02:40.248 asserts 16 16 16 0 n/a 00:02:40.248 00:02:40.248 Elapsed time = 0.000 seconds 00:02:40.248 13:26:19 -- unit/unittest.sh@138 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:02:40.248 00:02:40.248 00:02:40.248 CUnit - A unit testing framework for C - Version 2.1-3 00:02:40.248 http://cunit.sourceforge.net/ 00:02:40.248 00:02:40.248 00:02:40.248 Suite: crc64 00:02:40.248 Test: test_crc64_nvme ...passed 00:02:40.248 00:02:40.248 Run Summary: Type Total Ran Passed Failed Inactive 00:02:40.248 suites 1 1 n/a 0 0 00:02:40.248 tests 1 1 1 0 0 00:02:40.248 asserts 4 4 4 0 n/a 00:02:40.248 00:02:40.248 Elapsed time = 0.000 seconds 00:02:40.248 13:26:19 -- unit/unittest.sh@139 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:02:40.248 00:02:40.248 00:02:40.248 CUnit - A unit testing framework for C - Version 2.1-3 00:02:40.248 http://cunit.sourceforge.net/ 00:02:40.248 00:02:40.248 00:02:40.248 Suite: string 00:02:40.248 Test: test_parse_ip_addr ...passed 00:02:40.248 Test: test_str_chomp ...passed 00:02:40.248 Test: test_parse_capacity ...passed 00:02:40.248 Test: test_sprintf_append_realloc ...passed 00:02:40.248 Test: test_strtol ...passed 00:02:40.248 Test: test_strtoll ...passed 00:02:40.248 Test: test_strarray ...passed 00:02:40.248 Test: test_strcpy_replace ...passed 00:02:40.248 00:02:40.248 Run Summary: Type Total Ran Passed Failed Inactive 00:02:40.248 suites 1 1 n/a 0 0 00:02:40.248 tests 8 8 8 0 0 00:02:40.248 asserts 161 161 161 0 n/a 00:02:40.248 00:02:40.248 Elapsed time = 0.000 seconds 00:02:40.248 13:26:19 -- unit/unittest.sh@140 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:02:40.248 00:02:40.248 00:02:40.248 CUnit - A unit testing framework for C - Version 2.1-3 00:02:40.248 http://cunit.sourceforge.net/ 00:02:40.248 00:02:40.248 00:02:40.248 Suite: dif 00:02:40.248 Test: dif_generate_and_verify_test ...[2024-07-10 13:26:19.468644] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:02:40.248 [2024-07-10 13:26:19.469126] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:02:40.248 [2024-07-10 13:26:19.469243] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:02:40.248 [2024-07-10 13:26:19.469351] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:02:40.248 [2024-07-10 13:26:19.469460] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:02:40.249 [2024-07-10 13:26:19.469563] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:02:40.249 passed 00:02:40.249 Test: dif_disable_check_test ...[2024-07-10 13:26:19.469938] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 
00:02:40.249 [2024-07-10 13:26:19.470051] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:02:40.249 [2024-07-10 13:26:19.470153] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:02:40.249 passed 00:02:40.249 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-10 13:26:19.470514] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:02:40.249 [2024-07-10 13:26:19.470618] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:02:40.249 [2024-07-10 13:26:19.470724] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:02:40.249 [2024-07-10 13:26:19.470829] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:02:40.249 [2024-07-10 13:26:19.470953] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:02:40.249 [2024-07-10 13:26:19.471103] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:02:40.249 [2024-07-10 13:26:19.471208] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:02:40.249 [2024-07-10 13:26:19.471310] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:02:40.249 [2024-07-10 13:26:19.471412] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:02:40.249 [2024-07-10 13:26:19.471510] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:02:40.249 [2024-07-10 13:26:19.471611] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:02:40.249 passed 00:02:40.249 Test: dif_apptag_mask_test ...[2024-07-10 13:26:19.471718] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:02:40.249 [2024-07-10 13:26:19.471815] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:02:40.249 passed 00:02:40.249 Test: dif_sec_512_md_0_error_test ...[2024-07-10 13:26:19.471877] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:02:40.249 passed 00:02:40.249 Test: dif_sec_4096_md_0_error_test ...[2024-07-10 13:26:19.471911] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:02:40.249 [2024-07-10 13:26:19.471930] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:02:40.249 passed 00:02:40.249 Test: dif_sec_4100_md_128_error_test ...[2024-07-10 13:26:19.471953] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:02:40.249 [2024-07-10 13:26:19.471972] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:02:40.249 passed 00:02:40.249 Test: dif_guard_seed_test ...passed 00:02:40.249 Test: dif_guard_value_test ...passed 00:02:40.249 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:02:40.249 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:02:40.249 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:02:40.249 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:02:40.249 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:02:40.249 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:02:40.249 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:02:40.249 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:02:40.249 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:02:40.249 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:02:40.249 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:02:40.249 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:02:40.249 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:02:40.249 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:02:40.249 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:02:40.249 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:02:40.249 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:02:40.249 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:02:40.249 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-10 13:26:19.484336] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fc4c, Actual=fd4c 00:02:40.249 [2024-07-10 13:26:19.484841] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=ff21, Actual=fe21 00:02:40.249 [2024-07-10 13:26:19.485321] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:02:40.249 [2024-07-10 13:26:19.485832] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:02:40.249 [2024-07-10 13:26:19.486315] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:02:40.249 [2024-07-10 13:26:19.486793] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:02:40.249 [2024-07-10 13:26:19.487318] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd4c, Actual=e0f 00:02:40.249 [2024-07-10 13:26:19.487689] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fe21, Actual=e62a 00:02:40.249 [2024-07-10 13:26:19.488053] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1bb753ed, Actual=1ab753ed 00:02:40.249 [2024-07-10 13:26:19.488529] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=39574660, Actual=38574660 00:02:40.249 [2024-07-10 13:26:19.489006] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:02:40.249 [2024-07-10 13:26:19.489481] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:02:40.249 [2024-07-10 13:26:19.489910] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=10000000000005c 00:02:40.249 [2024-07-10 13:26:19.490226] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=10000000000005c 00:02:40.249 [2024-07-10 13:26:19.490536] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753ed, Actual=c6062949 00:02:40.249 [2024-07-10 13:26:19.490772] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=38574660, Actual=b53a6a23 00:02:40.249 [2024-07-10 13:26:19.491015] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a476a7728ecc20d3, Actual=a576a7728ecc20d3 00:02:40.249 [2024-07-10 13:26:19.491328] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=89010a2d4837a266, Actual=88010a2d4837a266 00:02:40.249 [2024-07-10 13:26:19.491636] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:02:40.249 [2024-07-10 13:26:19.491946] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:02:40.249 [2024-07-10 13:26:19.492256] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=1000000005c 00:02:40.249 [2024-07-10 13:26:19.492569] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=1000000005c 00:02:40.249 [2024-07-10 13:26:19.492878] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20d3, Actual=92ba7493cea5ea6f 00:02:40.249 [2024-07-10 13:26:19.493116] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=88010a2d4837a266, Actual=280821d80088dbb4 00:02:40.249 passed 00:02:40.249 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-10 13:26:19.493231] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:02:40.249 [2024-07-10 13:26:19.493273] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff21, Actual=fe21 00:02:40.249 [2024-07-10 13:26:19.493314] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, 
Expected=88, Actual=188 00:02:40.249 [2024-07-10 13:26:19.493354] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.249 [2024-07-10 13:26:19.493395] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:02:40.249 [2024-07-10 13:26:19.493435] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:02:40.249 [2024-07-10 13:26:19.493476] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e0f 00:02:40.249 [2024-07-10 13:26:19.493515] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=e62a 00:02:40.249 [2024-07-10 13:26:19.493555] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1bb753ed, Actual=1ab753ed 00:02:40.249 [2024-07-10 13:26:19.493595] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=39574660, Actual=38574660 00:02:40.249 [2024-07-10 13:26:19.493636] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.249 [2024-07-10 13:26:19.493686] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.249 [2024-07-10 13:26:19.493727] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000000058 00:02:40.249 [2024-07-10 13:26:19.493767] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000000058 00:02:40.249 [2024-07-10 13:26:19.493807] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c6062949 00:02:40.249 [2024-07-10 13:26:19.493847] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=b53a6a23 00:02:40.249 [2024-07-10 13:26:19.493887] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a476a7728ecc20d3, Actual=a576a7728ecc20d3 00:02:40.249 [2024-07-10 13:26:19.493927] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=89010a2d4837a266, Actual=88010a2d4837a266 00:02:40.249 [2024-07-10 13:26:19.493967] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.249 [2024-07-10 13:26:19.494008] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.249 [2024-07-10 13:26:19.494048] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:02:40.249 [2024-07-10 13:26:19.494089] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:02:40.250 [2024-07-10 13:26:19.494129] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=92ba7493cea5ea6f 00:02:40.250 [2024-07-10 13:26:19.494168] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=280821d80088dbb4 00:02:40.250 passed 00:02:40.250 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-10 13:26:19.494211] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:02:40.250 [2024-07-10 13:26:19.494252] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff21, Actual=fe21 00:02:40.250 [2024-07-10 13:26:19.494293] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.250 [2024-07-10 13:26:19.494452] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.250 [2024-07-10 13:26:19.494499] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:02:40.250 [2024-07-10 13:26:19.494540] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:02:40.250 [2024-07-10 13:26:19.494584] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e0f 00:02:40.250 [2024-07-10 13:26:19.494628] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=e62a 00:02:40.250 [2024-07-10 13:26:19.494672] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1bb753ed, Actual=1ab753ed 00:02:40.250 [2024-07-10 13:26:19.494724] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=39574660, Actual=38574660 00:02:40.250 [2024-07-10 13:26:19.494767] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.250 [2024-07-10 13:26:19.494810] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.250 [2024-07-10 13:26:19.494855] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000000058 00:02:40.250 [2024-07-10 13:26:19.494911] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000000058 00:02:40.250 [2024-07-10 13:26:19.494957] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c6062949 00:02:40.250 [2024-07-10 13:26:19.494999] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=b53a6a23 00:02:40.250 [2024-07-10 13:26:19.495042] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a476a7728ecc20d3, Actual=a576a7728ecc20d3 00:02:40.250 
[2024-07-10 13:26:19.495087] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=89010a2d4837a266, Actual=88010a2d4837a266 00:02:40.250 [2024-07-10 13:26:19.495129] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.250 [2024-07-10 13:26:19.495172] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.250 [2024-07-10 13:26:19.495217] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:02:40.250 [2024-07-10 13:26:19.495261] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:02:40.250 [2024-07-10 13:26:19.495305] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=92ba7493cea5ea6f 00:02:40.250 [2024-07-10 13:26:19.495348] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=280821d80088dbb4 00:02:40.250 passed 00:02:40.250 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-10 13:26:19.495401] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:02:40.250 [2024-07-10 13:26:19.495442] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff21, Actual=fe21 00:02:40.250 [2024-07-10 13:26:19.495487] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.250 [2024-07-10 13:26:19.495531] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.250 [2024-07-10 13:26:19.495576] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:02:40.250 [2024-07-10 13:26:19.495620] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:02:40.250 [2024-07-10 13:26:19.495664] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e0f 00:02:40.250 [2024-07-10 13:26:19.495706] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=e62a 00:02:40.250 [2024-07-10 13:26:19.495756] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1bb753ed, Actual=1ab753ed 00:02:40.250 [2024-07-10 13:26:19.495802] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=39574660, Actual=38574660 00:02:40.250 [2024-07-10 13:26:19.495846] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.250 [2024-07-10 13:26:19.495890] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 
00:02:40.250 [2024-07-10 13:26:19.495934] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000000058 00:02:40.250 [2024-07-10 13:26:19.495977] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000000058 00:02:40.250 [2024-07-10 13:26:19.496021] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c6062949 00:02:40.250 [2024-07-10 13:26:19.496063] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=b53a6a23 00:02:40.250 [2024-07-10 13:26:19.496106] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a476a7728ecc20d3, Actual=a576a7728ecc20d3 00:02:40.250 [2024-07-10 13:26:19.496151] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=89010a2d4837a266, Actual=88010a2d4837a266 00:02:40.250 [2024-07-10 13:26:19.496195] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.250 [2024-07-10 13:26:19.496240] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.250 [2024-07-10 13:26:19.496283] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:02:40.250 [2024-07-10 13:26:19.496327] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:02:40.250 [2024-07-10 13:26:19.496370] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=92ba7493cea5ea6f 00:02:40.250 [2024-07-10 13:26:19.496413] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=280821d80088dbb4 00:02:40.250 passed 00:02:40.250 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-10 13:26:19.496458] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:02:40.250 [2024-07-10 13:26:19.496502] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff21, Actual=fe21 00:02:40.250 [2024-07-10 13:26:19.496545] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.250 [2024-07-10 13:26:19.496589] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.250 [2024-07-10 13:26:19.496630] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:02:40.250 [2024-07-10 13:26:19.496670] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:02:40.250 [2024-07-10 13:26:19.496710] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: 
Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e0f 00:02:40.250 [2024-07-10 13:26:19.496749] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=e62a 00:02:40.250 passed 00:02:40.250 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-10 13:26:19.496796] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1bb753ed, Actual=1ab753ed 00:02:40.250 [2024-07-10 13:26:19.496839] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=39574660, Actual=38574660 00:02:40.250 [2024-07-10 13:26:19.496883] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.250 [2024-07-10 13:26:19.496926] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.250 [2024-07-10 13:26:19.496970] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000000058 00:02:40.250 [2024-07-10 13:26:19.497015] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000000058 00:02:40.250 [2024-07-10 13:26:19.497059] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c6062949 00:02:40.250 [2024-07-10 13:26:19.497101] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=b53a6a23 00:02:40.250 [2024-07-10 13:26:19.497144] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a476a7728ecc20d3, Actual=a576a7728ecc20d3 00:02:40.250 [2024-07-10 13:26:19.497188] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=89010a2d4837a266, Actual=88010a2d4837a266 00:02:40.250 [2024-07-10 13:26:19.497231] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.250 [2024-07-10 13:26:19.497274] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.250 [2024-07-10 13:26:19.497318] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:02:40.250 [2024-07-10 13:26:19.497362] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:02:40.250 [2024-07-10 13:26:19.497406] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=92ba7493cea5ea6f 00:02:40.250 [2024-07-10 13:26:19.497449] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=280821d80088dbb4 00:02:40.250 passed 00:02:40.250 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-10 13:26:19.497495] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare 
Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:02:40.250 [2024-07-10 13:26:19.497535] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff21, Actual=fe21 00:02:40.250 [2024-07-10 13:26:19.497579] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.250 [2024-07-10 13:26:19.497622] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.251 [2024-07-10 13:26:19.497679] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:02:40.251 [2024-07-10 13:26:19.497721] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:02:40.251 [2024-07-10 13:26:19.497766] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e0f 00:02:40.251 [2024-07-10 13:26:19.497809] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=e62a 00:02:40.251 passed 00:02:40.251 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-10 13:26:19.497855] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1bb753ed, Actual=1ab753ed 00:02:40.251 [2024-07-10 13:26:19.497896] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=39574660, Actual=38574660 00:02:40.251 [2024-07-10 13:26:19.497940] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.251 [2024-07-10 13:26:19.497983] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.251 [2024-07-10 13:26:19.498027] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000000058 00:02:40.251 [2024-07-10 13:26:19.498070] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000000058 00:02:40.251 [2024-07-10 13:26:19.498113] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c6062949 00:02:40.251 [2024-07-10 13:26:19.498157] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=b53a6a23 00:02:40.251 [2024-07-10 13:26:19.498201] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a476a7728ecc20d3, Actual=a576a7728ecc20d3 00:02:40.251 [2024-07-10 13:26:19.498244] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=89010a2d4837a266, Actual=88010a2d4837a266 00:02:40.251 [2024-07-10 13:26:19.498288] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.251 [2024-07-10 13:26:19.498331] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to 
compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.251 [2024-07-10 13:26:19.498375] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:02:40.251 [2024-07-10 13:26:19.498418] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:02:40.251 [2024-07-10 13:26:19.498462] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=92ba7493cea5ea6f 00:02:40.251 [2024-07-10 13:26:19.498506] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=280821d80088dbb4 00:02:40.251 passed 00:02:40.251 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:02:40.251 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:02:40.251 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:02:40.251 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:02:40.251 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:02:40.251 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:02:40.251 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:02:40.251 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:02:40.251 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:02:40.251 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-10 13:26:19.502814] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fc4c, Actual=fd4c 00:02:40.251 [2024-07-10 13:26:19.502941] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=147e, Actual=157e 00:02:40.251 [2024-07-10 13:26:19.503063] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:02:40.251 [2024-07-10 13:26:19.503181] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:02:40.251 [2024-07-10 13:26:19.503299] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:02:40.251 [2024-07-10 13:26:19.503418] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:02:40.251 [2024-07-10 13:26:19.503536] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd4c, Actual=e0f 00:02:40.251 [2024-07-10 13:26:19.503654] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=4e97, Actual=569c 00:02:40.251 [2024-07-10 13:26:19.503772] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1bb753ed, Actual=1ab753ed 00:02:40.251 [2024-07-10 13:26:19.503891] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=ff6969, Actual=1ff6969 00:02:40.251 [2024-07-10 13:26:19.504010] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare 
App Tag: LBA=92, Expected=88, Actual=188 00:02:40.251 [2024-07-10 13:26:19.504128] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:02:40.251 [2024-07-10 13:26:19.504248] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=10000000000005c 00:02:40.251 [2024-07-10 13:26:19.504367] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=10000000000005c 00:02:40.251 [2024-07-10 13:26:19.504485] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753ed, Actual=c6062949 00:02:40.251 [2024-07-10 13:26:19.504603] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=2b267559, Actual=a64b591a 00:02:40.251 [2024-07-10 13:26:19.504721] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a476a7728ecc20d3, Actual=a576a7728ecc20d3 00:02:40.251 [2024-07-10 13:26:19.504840] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1c07f8d13da38560, Actual=1d07f8d13da38560 00:02:40.251 [2024-07-10 13:26:19.504959] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:02:40.251 [2024-07-10 13:26:19.505077] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:02:40.251 [2024-07-10 13:26:19.505195] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=1000000005c 00:02:40.251 [2024-07-10 13:26:19.505314] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=1000000005c 00:02:40.251 [2024-07-10 13:26:19.505447] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20d3, Actual=92ba7493cea5ea6f 00:02:40.251 [2024-07-10 13:26:19.505567] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=cd8425887035b4fd, Actual=6d8d0e7d388acd2f 00:02:40.251 passed 00:02:40.251 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-10 13:26:19.505602] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:02:40.251 [2024-07-10 13:26:19.505632] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=77ff, Actual=76ff 00:02:40.251 [2024-07-10 13:26:19.505671] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.251 [2024-07-10 13:26:19.505704] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.251 [2024-07-10 13:26:19.505733] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:02:40.251 [2024-07-10 13:26:19.505765] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:02:40.251 [2024-07-10 13:26:19.505797] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e0f 00:02:40.251 [2024-07-10 13:26:19.505830] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=351d 00:02:40.251 [2024-07-10 13:26:19.505863] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1bb753ed, Actual=1ab753ed 00:02:40.251 [2024-07-10 13:26:19.505896] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c17f5c9c, Actual=c07f5c9c 00:02:40.251 [2024-07-10 13:26:19.505926] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.251 [2024-07-10 13:26:19.505958] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.251 [2024-07-10 13:26:19.505991] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000000058 00:02:40.251 [2024-07-10 13:26:19.506023] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000000058 00:02:40.251 [2024-07-10 13:26:19.506054] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c6062949 00:02:40.251 [2024-07-10 13:26:19.506086] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=67cb6cef 00:02:40.251 [2024-07-10 13:26:19.506119] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a476a7728ecc20d3, Actual=a576a7728ecc20d3 00:02:40.251 [2024-07-10 13:26:19.506153] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e9e5f7421b4c013f, Actual=e8e5f7421b4c013f 00:02:40.251 [2024-07-10 13:26:19.506186] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.251 [2024-07-10 13:26:19.506218] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.251 [2024-07-10 13:26:19.506251] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:02:40.251 [2024-07-10 13:26:19.506283] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:02:40.251 [2024-07-10 13:26:19.506315] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=92ba7493cea5ea6f 00:02:40.251 [2024-07-10 13:26:19.506348] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=986f01ee1e654970 00:02:40.251 passed 00:02:40.251 Test: dix_sec_512_md_0_error ...passed 00:02:40.251 Test: 
dix_sec_512_md_8_prchk_0_single_iov ...[2024-07-10 13:26:19.506359] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:02:40.251 passed 00:02:40.251 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:02:40.251 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:02:40.251 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:02:40.251 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:02:40.251 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:02:40.251 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:02:40.251 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:02:40.251 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:02:40.251 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-10 13:26:19.510708] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fc4c, Actual=fd4c 00:02:40.251 [2024-07-10 13:26:19.510842] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=147e, Actual=157e 00:02:40.252 [2024-07-10 13:26:19.510967] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:02:40.252 [2024-07-10 13:26:19.511090] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:02:40.252 [2024-07-10 13:26:19.511209] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:02:40.252 [2024-07-10 13:26:19.511328] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:02:40.252 [2024-07-10 13:26:19.511447] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd4c, Actual=e0f 00:02:40.252 [2024-07-10 13:26:19.511566] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=4e97, Actual=569c 00:02:40.252 [2024-07-10 13:26:19.511682] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1bb753ed, Actual=1ab753ed 00:02:40.252 [2024-07-10 13:26:19.511804] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=ff6969, Actual=1ff6969 00:02:40.252 [2024-07-10 13:26:19.511918] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:02:40.252 [2024-07-10 13:26:19.512039] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:02:40.252 [2024-07-10 13:26:19.512165] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=10000000000005c 00:02:40.252 [2024-07-10 13:26:19.512283] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=10000000000005c 00:02:40.252 [2024-07-10 13:26:19.512401] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, 
Expected=1ab753ed, Actual=c6062949 00:02:40.252 [2024-07-10 13:26:19.512522] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=2b267559, Actual=a64b591a 00:02:40.252 [2024-07-10 13:26:19.512636] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a476a7728ecc20d3, Actual=a576a7728ecc20d3 00:02:40.252 [2024-07-10 13:26:19.512757] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1c07f8d13da38560, Actual=1d07f8d13da38560 00:02:40.252 [2024-07-10 13:26:19.512878] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:02:40.252 [2024-07-10 13:26:19.513001] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:02:40.252 [2024-07-10 13:26:19.513122] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=1000000005c 00:02:40.252 [2024-07-10 13:26:19.513238] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=1000000005c 00:02:40.252 [2024-07-10 13:26:19.513360] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20d3, Actual=92ba7493cea5ea6f 00:02:40.252 passed 00:02:40.252 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-10 13:26:19.513479] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=cd8425887035b4fd, Actual=6d8d0e7d388acd2f 00:02:40.252 [2024-07-10 13:26:19.513524] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:02:40.252 [2024-07-10 13:26:19.513554] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=77ff, Actual=76ff 00:02:40.252 [2024-07-10 13:26:19.513584] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.252 [2024-07-10 13:26:19.513614] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.252 [2024-07-10 13:26:19.513652] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:02:40.252 [2024-07-10 13:26:19.513682] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:02:40.252 [2024-07-10 13:26:19.513711] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e0f 00:02:40.252 [2024-07-10 13:26:19.513741] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=351d 00:02:40.252 [2024-07-10 13:26:19.513770] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1bb753ed, Actual=1ab753ed 00:02:40.252 [2024-07-10 13:26:19.513799] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: 
Failed to compare Guard: LBA=88, Expected=c17f5c9c, Actual=c07f5c9c 00:02:40.252 [2024-07-10 13:26:19.513828] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.252 [2024-07-10 13:26:19.513857] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.252 [2024-07-10 13:26:19.513886] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000000058 00:02:40.252 [2024-07-10 13:26:19.513915] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000000058 00:02:40.252 [2024-07-10 13:26:19.513944] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c6062949 00:02:40.252 [2024-07-10 13:26:19.513973] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=67cb6cef 00:02:40.252 [2024-07-10 13:26:19.514003] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a476a7728ecc20d3, Actual=a576a7728ecc20d3 00:02:40.252 [2024-07-10 13:26:19.514033] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e9e5f7421b4c013f, Actual=e8e5f7421b4c013f 00:02:40.252 [2024-07-10 13:26:19.514062] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.252 [2024-07-10 13:26:19.514091] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:02:40.252 [2024-07-10 13:26:19.514121] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:02:40.252 [2024-07-10 13:26:19.514150] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:02:40.252 [2024-07-10 13:26:19.514180] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=92ba7493cea5ea6f 00:02:40.252 passed 00:02:40.252 Test: set_md_interleave_iovs_test ...[2024-07-10 13:26:19.514209] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=986f01ee1e654970 00:02:40.252 passed 00:02:40.252 Test: set_md_interleave_iovs_split_test ...passed 00:02:40.252 Test: dif_generate_stream_pi_16_test ...passed 00:02:40.252 Test: dif_generate_stream_test ...passed 00:02:40.252 Test: set_md_interleave_iovs_alignment_test ...passed 00:02:40.252 Test: dif_generate_split_test ...[2024-07-10 13:26:19.514859] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:02:40.252 passed 00:02:40.252 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:02:40.252 Test: dif_verify_split_test ...passed 00:02:40.252 Test: dif_verify_stream_multi_segments_test ...passed 00:02:40.252 Test: update_crc32c_pi_16_test ...passed 00:02:40.252 Test: update_crc32c_test ...passed 00:02:40.252 Test: dif_update_crc32c_split_test ...passed 00:02:40.252 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:02:40.252 Test: get_range_with_md_test ...passed 00:02:40.252 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:02:40.252 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:02:40.252 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:02:40.252 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:02:40.252 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:02:40.252 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:02:40.252 Test: dif_generate_and_verify_unmap_test ...passed 00:02:40.252 00:02:40.252 Run Summary: Type Total Ran Passed Failed Inactive 00:02:40.252 suites 1 1 n/a 0 0 00:02:40.252 tests 79 79 79 0 0 00:02:40.252 asserts 3584 3584 3584 0 n/a 00:02:40.252 00:02:40.252 Elapsed time = 0.047 seconds 00:02:40.252 13:26:19 -- unit/unittest.sh@141 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:02:40.252 00:02:40.252 00:02:40.252 CUnit - A unit testing framework for C - Version 2.1-3 00:02:40.252 http://cunit.sourceforge.net/ 00:02:40.252 00:02:40.252 00:02:40.252 Suite: iov 00:02:40.252 Test: test_single_iov ...passed 00:02:40.252 Test: test_simple_iov ...passed 00:02:40.252 Test: test_complex_iov ...passed 00:02:40.252 Test: test_iovs_to_buf ...passed 00:02:40.252 Test: test_buf_to_iovs ...passed 00:02:40.252 Test: test_memset ...passed 00:02:40.253 Test: test_iov_one ...passed 00:02:40.253 Test: test_iov_xfer ...passed 00:02:40.253 00:02:40.253 Run Summary: Type Total Ran Passed Failed Inactive 00:02:40.253 suites 1 1 n/a 0 0 00:02:40.253 tests 8 8 8 0 0 00:02:40.253 asserts 156 156 156 0 n/a 00:02:40.253 00:02:40.253 Elapsed time = 0.000 seconds 00:02:40.253 13:26:19 -- unit/unittest.sh@142 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:02:40.253 00:02:40.253 00:02:40.253 CUnit - A unit testing framework for C - Version 2.1-3 00:02:40.253 http://cunit.sourceforge.net/ 00:02:40.253 00:02:40.253 00:02:40.253 Suite: math 00:02:40.253 Test: test_serial_number_arithmetic ...passed 00:02:40.253 Suite: erase 00:02:40.253 Test: test_memset_s ...passed 00:02:40.253 00:02:40.253 Run Summary: Type Total Ran Passed Failed Inactive 00:02:40.253 suites 2 2 n/a 0 0 00:02:40.253 tests 2 2 2 0 0 00:02:40.253 asserts 18 18 18 0 n/a 00:02:40.253 00:02:40.253 Elapsed time = 0.000 seconds 00:02:40.253 13:26:19 -- unit/unittest.sh@143 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:02:40.253 00:02:40.253 00:02:40.253 CUnit - A unit testing framework for C - Version 2.1-3 00:02:40.253 http://cunit.sourceforge.net/ 00:02:40.253 00:02:40.253 00:02:40.253 Suite: pipe 00:02:40.253 Test: test_create_destroy ...passed 00:02:40.253 Test: test_write_get_buffer ...passed 00:02:40.253 Test: test_write_advance ...passed 00:02:40.253 Test: test_read_get_buffer ...passed 00:02:40.253 Test: test_read_advance ...passed 00:02:40.253 Test: test_data ...passed 00:02:40.253 00:02:40.253 Run Summary: Type Total Ran Passed Failed Inactive 00:02:40.253 
suites 1 1 n/a 0 0 00:02:40.253 tests 6 6 6 0 0 00:02:40.253 asserts 250 250 250 0 n/a 00:02:40.253 00:02:40.253 Elapsed time = 0.000 seconds 00:02:40.253 13:26:19 -- unit/unittest.sh@144 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:02:40.253 00:02:40.253 00:02:40.253 CUnit - A unit testing framework for C - Version 2.1-3 00:02:40.253 http://cunit.sourceforge.net/ 00:02:40.253 00:02:40.253 00:02:40.253 Suite: xor 00:02:40.253 Test: test_xor_gen ...passed 00:02:40.253 00:02:40.253 Run Summary: Type Total Ran Passed Failed Inactive 00:02:40.253 suites 1 1 n/a 0 0 00:02:40.253 tests 1 1 1 0 0 00:02:40.253 asserts 17 17 17 0 n/a 00:02:40.253 00:02:40.253 Elapsed time = 0.000 seconds 00:02:40.253 00:02:40.253 real 0m0.151s 00:02:40.253 user 0m0.071s 00:02:40.253 sys 0m0.063s 00:02:40.253 13:26:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:40.253 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:40.253 ************************************ 00:02:40.253 END TEST unittest_util 00:02:40.253 ************************************ 00:02:40.253 13:26:19 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:02:40.253 13:26:19 -- unit/unittest.sh@285 -- # run_test unittest_dma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:02:40.253 13:26:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:40.253 13:26:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:40.253 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:40.253 ************************************ 00:02:40.253 START TEST unittest_dma 00:02:40.253 ************************************ 00:02:40.253 13:26:19 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:02:40.253 00:02:40.253 00:02:40.253 CUnit - A unit testing framework for C - Version 2.1-3 00:02:40.253 http://cunit.sourceforge.net/ 00:02:40.253 00:02:40.253 00:02:40.253 Suite: dma_suite 00:02:40.253 Test: test_dma ...[2024-07-10 13:26:19.588800] /usr/home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:02:40.253 passed 00:02:40.253 00:02:40.253 Run Summary: Type Total Ran Passed Failed Inactive 00:02:40.253 suites 1 1 n/a 0 0 00:02:40.253 tests 1 1 1 0 0 00:02:40.253 asserts 50 50 50 0 n/a 00:02:40.253 00:02:40.253 Elapsed time = 0.000 seconds 00:02:40.253 00:02:40.253 real 0m0.004s 00:02:40.253 user 0m0.004s 00:02:40.253 sys 0m0.003s 00:02:40.253 13:26:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:40.253 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:40.253 ************************************ 00:02:40.253 END TEST unittest_dma 00:02:40.253 ************************************ 00:02:40.511 13:26:19 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:02:40.511 13:26:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:40.511 13:26:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:40.511 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:40.511 ************************************ 00:02:40.511 START TEST unittest_init 00:02:40.511 ************************************ 00:02:40.511 13:26:19 -- common/autotest_common.sh@1104 -- # unittest_init 00:02:40.511 13:26:19 -- unit/unittest.sh@148 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:02:40.511 00:02:40.511 00:02:40.511 CUnit - A unit testing framework 
for C - Version 2.1-3 00:02:40.511 http://cunit.sourceforge.net/ 00:02:40.511 00:02:40.511 00:02:40.511 Suite: subsystem_suite 00:02:40.511 Test: subsystem_sort_test_depends_on_single ...passed 00:02:40.511 Test: subsystem_sort_test_depends_on_multiple ...passed 00:02:40.511 Test: subsystem_sort_test_missing_dependency ...[2024-07-10 13:26:19.634376] /usr/home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:02:40.511 passed 00:02:40.511 00:02:40.511 [2024-07-10 13:26:19.634570] /usr/home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:02:40.511 Run Summary: Type Total Ran Passed Failed Inactive 00:02:40.511 suites 1 1 n/a 0 0 00:02:40.511 tests 3 3 3 0 0 00:02:40.511 asserts 20 20 20 0 n/a 00:02:40.511 00:02:40.511 Elapsed time = 0.000 seconds 00:02:40.511 00:02:40.511 real 0m0.006s 00:02:40.511 user 0m0.005s 00:02:40.511 sys 0m0.004s 00:02:40.511 13:26:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:40.511 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:40.511 ************************************ 00:02:40.511 END TEST unittest_init 00:02:40.511 ************************************ 00:02:40.511 13:26:19 -- unit/unittest.sh@289 -- # '[' no = yes ']' 00:02:40.511 13:26:19 -- unit/unittest.sh@302 -- # set +x 00:02:40.511 00:02:40.511 00:02:40.511 ===================== 00:02:40.511 All unit tests passed 00:02:40.511 ===================== 00:02:40.511 WARN: lcov not installed or SPDK built without coverage! 00:02:40.511 WARN: neither valgrind nor ASAN is enabled! 00:02:40.511 00:02:40.511 00:02:40.511 00:02:40.511 real 0m14.170s 00:02:40.511 user 0m11.294s 00:02:40.511 sys 0m1.758s 00:02:40.511 13:26:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:40.511 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:40.511 ************************************ 00:02:40.511 END TEST unittest 00:02:40.511 ************************************ 00:02:40.511 13:26:19 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:02:40.511 13:26:19 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:02:40.511 13:26:19 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:02:40.511 13:26:19 -- spdk/autotest.sh@173 -- # timing_enter lib 00:02:40.511 13:26:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:40.511 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:40.511 13:26:19 -- spdk/autotest.sh@175 -- # run_test env /usr/home/vagrant/spdk_repo/spdk/test/env/env.sh 00:02:40.511 13:26:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:40.511 13:26:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:40.511 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:40.511 ************************************ 00:02:40.511 START TEST env 00:02:40.511 ************************************ 00:02:40.511 13:26:19 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/env.sh 00:02:40.770 * Looking for test storage... 
00:02:40.770 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/env 00:02:40.770 13:26:19 -- env/env.sh@10 -- # run_test env_memory /usr/home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:02:40.770 13:26:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:40.770 13:26:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:40.770 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:40.770 ************************************ 00:02:40.770 START TEST env_memory 00:02:40.770 ************************************ 00:02:40.770 13:26:19 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:02:40.770 00:02:40.770 00:02:40.770 CUnit - A unit testing framework for C - Version 2.1-3 00:02:40.770 http://cunit.sourceforge.net/ 00:02:40.770 00:02:40.770 00:02:40.770 Suite: memory 00:02:40.770 Test: alloc and free memory map ...[2024-07-10 13:26:19.950728] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:02:40.770 passed 00:02:40.770 Test: mem map translation ...[2024-07-10 13:26:19.959895] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:02:40.770 [2024-07-10 13:26:19.959943] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:02:40.770 [2024-07-10 13:26:19.959959] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:02:40.770 [2024-07-10 13:26:19.959968] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:02:40.770 passed 00:02:40.770 Test: mem map registration ...[2024-07-10 13:26:19.968510] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:02:40.770 [2024-07-10 13:26:19.968537] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:02:40.770 passed 00:02:40.770 Test: mem map adjacent registrations ...passed 00:02:40.770 00:02:40.770 Run Summary: Type Total Ran Passed Failed Inactive 00:02:40.770 suites 1 1 n/a 0 0 00:02:40.770 tests 4 4 4 0 0 00:02:40.770 asserts 152 152 152 0 n/a 00:02:40.770 00:02:40.770 Elapsed time = 0.031 seconds 00:02:40.770 00:02:40.770 real 0m0.048s 00:02:40.770 user 0m0.039s 00:02:40.770 sys 0m0.009s 00:02:40.770 13:26:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:40.770 13:26:19 -- common/autotest_common.sh@10 -- # set +x 00:02:40.770 ************************************ 00:02:40.770 END TEST env_memory 00:02:40.770 ************************************ 00:02:40.770 13:26:20 -- env/env.sh@11 -- # run_test env_vtophys /usr/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:02:40.770 13:26:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:40.770 13:26:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:40.770 13:26:20 -- common/autotest_common.sh@10 -- # set +x 00:02:40.770 ************************************ 00:02:40.770 START TEST env_vtophys 00:02:40.770 ************************************ 00:02:40.770 13:26:20 -- common/autotest_common.sh@1104 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:02:40.770 EAL: lib.eal log level changed from notice to debug 00:02:40.770 EAL: Sysctl reports 10 cpus 00:02:40.770 EAL: Detected lcore 0 as core 0 on socket 0 00:02:40.770 EAL: Detected lcore 1 as core 0 on socket 0 00:02:40.771 EAL: Detected lcore 2 as core 0 on socket 0 00:02:40.771 EAL: Detected lcore 3 as core 0 on socket 0 00:02:40.771 EAL: Detected lcore 4 as core 0 on socket 0 00:02:40.771 EAL: Detected lcore 5 as core 0 on socket 0 00:02:40.771 EAL: Detected lcore 6 as core 0 on socket 0 00:02:40.771 EAL: Detected lcore 7 as core 0 on socket 0 00:02:40.771 EAL: Detected lcore 8 as core 0 on socket 0 00:02:40.771 EAL: Detected lcore 9 as core 0 on socket 0 00:02:40.771 EAL: Maximum logical cores by configuration: 128 00:02:40.771 EAL: Detected CPU lcores: 10 00:02:40.771 EAL: Detected NUMA nodes: 1 00:02:40.771 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:02:40.771 EAL: Checking presence of .so 'librte_eal.so.24' 00:02:40.771 EAL: Checking presence of .so 'librte_eal.so' 00:02:40.771 EAL: Detected static linkage of DPDK 00:02:40.771 EAL: No shared files mode enabled, IPC will be disabled 00:02:40.771 EAL: PCI scan found 10 devices 00:02:40.771 EAL: Specific IOVA mode is not requested, autodetecting 00:02:40.771 EAL: Selecting IOVA mode according to bus requests 00:02:40.771 EAL: Bus pci wants IOVA as 'PA' 00:02:40.771 EAL: Selected IOVA mode 'PA' 00:02:40.771 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:02:40.771 EAL: Ask a virtual area of 0x2e000 bytes 00:02:40.771 EAL: WARNING! Base virtual address hint (0x1000005000 != 0x100044f000) not respected! 00:02:40.771 EAL: This may cause issues with mapping memory into secondary processes 00:02:40.771 EAL: Virtual area found at 0x100044f000 (size = 0x2e000) 00:02:40.771 EAL: Setting up physically contiguous memory... 00:02:40.771 EAL: Ask a virtual area of 0x1000 bytes 00:02:40.771 EAL: WARNING! Base virtual address hint (0x100000b000 != 0x1000d8e000) not respected! 00:02:40.771 EAL: This may cause issues with mapping memory into secondary processes 00:02:40.771 EAL: Virtual area found at 0x1000d8e000 (size = 0x1000) 00:02:40.771 EAL: Memseg list allocated at socket 0, page size 0x40000kB 00:02:40.771 EAL: Ask a virtual area of 0xf0000000 bytes 00:02:40.771 EAL: WARNING! Base virtual address hint (0x105000c000 != 0x1060000000) not respected! 
00:02:40.771 EAL: This may cause issues with mapping memory into secondary processes 00:02:40.771 EAL: Virtual area found at 0x1060000000 (size = 0xf0000000) 00:02:40.771 EAL: VA reserved for memseg list at 0x1060000000, size f0000000 00:02:40.771 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x210000000, len 268435456 00:02:41.029 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x220000000, len 268435456 00:02:41.030 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x230000000, len 268435456 00:02:41.030 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x240000000, len 268435456 00:02:41.030 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x250000000, len 268435456 00:02:41.030 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x260000000, len 268435456 00:02:41.289 EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x270000000, len 268435456 00:02:41.289 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x280000000, len 268435456 00:02:41.289 EAL: No shared files mode enabled, IPC is disabled 00:02:41.289 EAL: Added 2048M to heap on socket 0 00:02:41.289 EAL: TSC is not safe to use in SMP mode 00:02:41.289 EAL: TSC is not invariant 00:02:41.289 EAL: TSC frequency is ~2294610 KHz 00:02:41.289 EAL: Main lcore 0 is ready (tid=82d412000;cpuset=[0]) 00:02:41.289 EAL: PCI scan found 10 devices 00:02:41.289 EAL: Registering mem event callbacks not supported 00:02:41.289 00:02:41.289 00:02:41.289 CUnit - A unit testing framework for C - Version 2.1-3 00:02:41.289 http://cunit.sourceforge.net/ 00:02:41.289 00:02:41.289 00:02:41.289 Suite: components_suite 00:02:41.289 Test: vtophys_malloc_test ...passed 00:02:41.549 Test: vtophys_spdk_malloc_test ...passed 00:02:41.549 00:02:41.549 Run Summary: Type Total Ran Passed Failed Inactive 00:02:41.549 suites 1 1 n/a 0 0 00:02:41.549 tests 2 2 2 0 0 00:02:41.549 asserts 497 497 497 0 n/a 00:02:41.549 00:02:41.549 Elapsed time = 0.297 seconds 00:02:41.549 00:02:41.549 real 0m0.803s 00:02:41.549 user 0m0.292s 00:02:41.549 sys 0m0.508s 00:02:41.549 13:26:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:41.549 13:26:20 -- common/autotest_common.sh@10 -- # set +x 00:02:41.549 ************************************ 00:02:41.549 END TEST env_vtophys 00:02:41.549 ************************************ 00:02:41.549 13:26:20 -- env/env.sh@12 -- # run_test env_pci /usr/home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:02:41.549 13:26:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:41.549 13:26:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:41.549 13:26:20 -- common/autotest_common.sh@10 -- # set +x 00:02:41.549 ************************************ 00:02:41.549 START TEST env_pci 00:02:41.549 ************************************ 00:02:41.549 13:26:20 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:02:41.549 00:02:41.549 00:02:41.549 CUnit - A unit testing framework for C - Version 2.1-3 00:02:41.549 http://cunit.sourceforge.net/ 00:02:41.549 00:02:41.549 00:02:41.549 Suite: pci 00:02:41.549 Test: pci_hook ...passed 00:02:41.549 00:02:41.549 Run Summary: Type Total Ran Passed Failed Inactive 00:02:41.549 suites 1 1 n/a 0 0 00:02:41.549 tests 1 1 1 0 0 00:02:41.549 asserts 25 25 25 0 n/a 00:02:41.549 00:02:41.549 Elapsed time = 0.000 seconds 00:02:41.549 EAL: Cannot find device (10000:00:01.0) 00:02:41.549 EAL: Failed to attach device on primary process 00:02:41.549 00:02:41.549 real 0m0.014s 00:02:41.549 user 0m0.001s 00:02:41.549 sys 
0m0.014s 00:02:41.549 13:26:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:41.549 13:26:20 -- common/autotest_common.sh@10 -- # set +x 00:02:41.549 ************************************ 00:02:41.549 END TEST env_pci 00:02:41.549 ************************************ 00:02:41.808 13:26:20 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:02:41.808 13:26:20 -- env/env.sh@15 -- # uname 00:02:41.808 13:26:20 -- env/env.sh@15 -- # '[' FreeBSD = Linux ']' 00:02:41.808 13:26:20 -- env/env.sh@24 -- # run_test env_dpdk_post_init /usr/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:02:41.808 13:26:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:02:41.808 13:26:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:41.808 13:26:20 -- common/autotest_common.sh@10 -- # set +x 00:02:41.808 ************************************ 00:02:41.808 START TEST env_dpdk_post_init 00:02:41.808 ************************************ 00:02:41.808 13:26:20 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:02:41.808 EAL: Sysctl reports 10 cpus 00:02:41.808 EAL: Detected CPU lcores: 10 00:02:41.808 EAL: Detected NUMA nodes: 1 00:02:41.808 EAL: Detected static linkage of DPDK 00:02:41.808 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:02:41.808 EAL: Selected IOVA mode 'PA' 00:02:41.808 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:02:41.808 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x210000000, len 268435456 00:02:41.808 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x220000000, len 268435456 00:02:41.808 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x230000000, len 268435456 00:02:42.067 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x240000000, len 268435456 00:02:42.067 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x250000000, len 268435456 00:02:42.067 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x260000000, len 268435456 00:02:42.067 EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x270000000, len 268435456 00:02:42.067 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x280000000, len 268435456 00:02:42.067 EAL: TSC is not safe to use in SMP mode 00:02:42.067 EAL: TSC is not invariant 00:02:42.067 TELEMETRY: No legacy callbacks, legacy socket not created 00:02:42.067 [2024-07-10 13:26:21.417420] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:02:42.067 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:02:42.067 Starting DPDK initialization... 00:02:42.067 Starting SPDK post initialization... 00:02:42.067 SPDK NVMe probe 00:02:42.067 Attaching to 0000:00:06.0 00:02:42.067 Attached to 0000:00:06.0 00:02:42.067 Cleaning up... 
00:02:42.326 00:02:42.326 real 0m0.519s 00:02:42.326 user 0m0.012s 00:02:42.326 sys 0m0.501s 00:02:42.326 13:26:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:42.326 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:02:42.326 ************************************ 00:02:42.326 END TEST env_dpdk_post_init 00:02:42.326 ************************************ 00:02:42.326 13:26:21 -- env/env.sh@26 -- # uname 00:02:42.326 13:26:21 -- env/env.sh@26 -- # '[' FreeBSD = Linux ']' 00:02:42.326 00:02:42.326 real 0m1.788s 00:02:42.326 user 0m0.584s 00:02:42.326 sys 0m1.255s 00:02:42.326 13:26:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:42.326 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:02:42.326 ************************************ 00:02:42.326 END TEST env 00:02:42.326 ************************************ 00:02:42.326 13:26:21 -- spdk/autotest.sh@176 -- # run_test rpc /usr/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:02:42.326 13:26:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:42.326 13:26:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:42.326 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:02:42.326 ************************************ 00:02:42.326 START TEST rpc 00:02:42.326 ************************************ 00:02:42.326 13:26:21 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:02:42.585 * Looking for test storage... 00:02:42.585 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc 00:02:42.585 13:26:21 -- rpc/rpc.sh@65 -- # spdk_pid=45200 00:02:42.585 13:26:21 -- rpc/rpc.sh@64 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:02:42.585 13:26:21 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:02:42.585 13:26:21 -- rpc/rpc.sh@67 -- # waitforlisten 45200 00:02:42.585 13:26:21 -- common/autotest_common.sh@819 -- # '[' -z 45200 ']' 00:02:42.585 13:26:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:02:42.585 13:26:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:02:42.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:02:42.585 13:26:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:02:42.585 13:26:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:02:42.585 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:02:42.585 [2024-07-10 13:26:21.774901] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:02:42.585 [2024-07-10 13:26:21.775208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:02:43.153 EAL: TSC is not safe to use in SMP mode 00:02:43.153 EAL: TSC is not invariant 00:02:43.153 [2024-07-10 13:26:22.240539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:43.153 [2024-07-10 13:26:22.326791] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:02:43.153 [2024-07-10 13:26:22.326900] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:02:43.153 [2024-07-10 13:26:22.326910] app.c: 492:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 45200' to capture a snapshot of events at runtime. 
00:02:43.153 [2024-07-10 13:26:22.326933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:02:43.719 13:26:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:02:43.719 13:26:22 -- common/autotest_common.sh@852 -- # return 0 00:02:43.719 13:26:22 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/test/rpc 00:02:43.719 13:26:22 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/test/rpc 00:02:43.719 13:26:22 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:02:43.719 13:26:22 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:02:43.719 13:26:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:43.719 13:26:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:43.719 13:26:22 -- common/autotest_common.sh@10 -- # set +x 00:02:43.719 ************************************ 00:02:43.719 START TEST rpc_integrity 00:02:43.719 ************************************ 00:02:43.719 13:26:22 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:02:43.719 13:26:22 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:02:43.719 13:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.719 13:26:22 -- common/autotest_common.sh@10 -- # set +x 00:02:43.719 13:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.719 13:26:22 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:02:43.719 13:26:22 -- rpc/rpc.sh@13 -- # jq length 00:02:43.719 13:26:22 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:02:43.719 13:26:22 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:02:43.719 13:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.719 13:26:22 -- common/autotest_common.sh@10 -- # set +x 00:02:43.719 13:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.719 13:26:22 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:02:43.719 13:26:22 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:02:43.719 13:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.719 13:26:22 -- common/autotest_common.sh@10 -- # set +x 00:02:43.719 13:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.719 13:26:22 -- rpc/rpc.sh@16 -- # bdevs='[ 00:02:43.719 { 00:02:43.719 "name": "Malloc0", 00:02:43.719 "aliases": [ 00:02:43.719 "001dcb05-3ec0-11ef-b9c4-5b09e08d4792" 00:02:43.719 ], 00:02:43.719 "product_name": "Malloc disk", 00:02:43.719 "block_size": 512, 00:02:43.719 "num_blocks": 16384, 00:02:43.719 "uuid": "001dcb05-3ec0-11ef-b9c4-5b09e08d4792", 00:02:43.719 "assigned_rate_limits": { 00:02:43.719 "rw_ios_per_sec": 0, 00:02:43.719 "rw_mbytes_per_sec": 0, 00:02:43.719 "r_mbytes_per_sec": 0, 00:02:43.719 "w_mbytes_per_sec": 0 00:02:43.719 }, 00:02:43.719 "claimed": false, 00:02:43.719 "zoned": false, 00:02:43.719 "supported_io_types": { 00:02:43.719 "read": true, 00:02:43.719 "write": true, 00:02:43.719 "unmap": true, 00:02:43.719 "write_zeroes": true, 00:02:43.719 "flush": true, 00:02:43.719 "reset": true, 00:02:43.719 "compare": false, 00:02:43.719 "compare_and_write": false, 00:02:43.719 "abort": true, 00:02:43.719 "nvme_admin": false, 00:02:43.719 "nvme_io": false 00:02:43.719 }, 00:02:43.719 "memory_domains": [ 00:02:43.719 { 00:02:43.719 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:02:43.719 "dma_device_type": 2 00:02:43.719 } 00:02:43.719 ], 00:02:43.719 "driver_specific": {} 00:02:43.719 } 00:02:43.719 ]' 00:02:43.719 13:26:22 -- rpc/rpc.sh@17 -- # jq length 00:02:43.719 13:26:22 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:02:43.719 13:26:22 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:02:43.719 13:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.719 13:26:22 -- common/autotest_common.sh@10 -- # set +x 00:02:43.719 [2024-07-10 13:26:22.873001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:02:43.719 [2024-07-10 13:26:22.873047] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:02:43.719 [2024-07-10 13:26:22.873673] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa5d780 00:02:43.719 [2024-07-10 13:26:22.873704] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:02:43.719 [2024-07-10 13:26:22.874497] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:02:43.719 [2024-07-10 13:26:22.874539] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:02:43.719 Passthru0 00:02:43.719 13:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.719 13:26:22 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:02:43.719 13:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.719 13:26:22 -- common/autotest_common.sh@10 -- # set +x 00:02:43.719 13:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.719 13:26:22 -- rpc/rpc.sh@20 -- # bdevs='[ 00:02:43.719 { 00:02:43.719 "name": "Malloc0", 00:02:43.719 "aliases": [ 00:02:43.719 "001dcb05-3ec0-11ef-b9c4-5b09e08d4792" 00:02:43.719 ], 00:02:43.719 "product_name": "Malloc disk", 00:02:43.719 "block_size": 512, 00:02:43.719 "num_blocks": 16384, 00:02:43.719 "uuid": "001dcb05-3ec0-11ef-b9c4-5b09e08d4792", 00:02:43.719 "assigned_rate_limits": { 00:02:43.719 "rw_ios_per_sec": 0, 00:02:43.719 "rw_mbytes_per_sec": 0, 00:02:43.719 "r_mbytes_per_sec": 0, 00:02:43.719 "w_mbytes_per_sec": 0 00:02:43.719 }, 00:02:43.719 "claimed": true, 00:02:43.719 "claim_type": "exclusive_write", 00:02:43.719 "zoned": false, 00:02:43.719 "supported_io_types": { 00:02:43.719 "read": true, 00:02:43.719 "write": true, 00:02:43.719 "unmap": true, 00:02:43.719 "write_zeroes": true, 00:02:43.719 "flush": true, 00:02:43.719 "reset": true, 00:02:43.719 "compare": false, 00:02:43.719 "compare_and_write": false, 00:02:43.719 "abort": true, 00:02:43.719 "nvme_admin": false, 00:02:43.719 "nvme_io": false 00:02:43.719 }, 00:02:43.719 "memory_domains": [ 00:02:43.719 { 00:02:43.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:43.719 "dma_device_type": 2 00:02:43.719 } 00:02:43.719 ], 00:02:43.719 "driver_specific": {} 00:02:43.719 }, 00:02:43.719 { 00:02:43.719 "name": "Passthru0", 00:02:43.719 "aliases": [ 00:02:43.719 "229f023e-3086-c859-b86b-e14ea7bccac0" 00:02:43.719 ], 00:02:43.719 "product_name": "passthru", 00:02:43.719 "block_size": 512, 00:02:43.719 "num_blocks": 16384, 00:02:43.719 "uuid": "229f023e-3086-c859-b86b-e14ea7bccac0", 00:02:43.719 "assigned_rate_limits": { 00:02:43.719 "rw_ios_per_sec": 0, 00:02:43.719 "rw_mbytes_per_sec": 0, 00:02:43.719 "r_mbytes_per_sec": 0, 00:02:43.719 "w_mbytes_per_sec": 0 00:02:43.719 }, 00:02:43.719 "claimed": false, 00:02:43.719 "zoned": false, 00:02:43.719 "supported_io_types": { 00:02:43.719 "read": true, 00:02:43.719 "write": true, 
00:02:43.719 "unmap": true, 00:02:43.719 "write_zeroes": true, 00:02:43.719 "flush": true, 00:02:43.719 "reset": true, 00:02:43.719 "compare": false, 00:02:43.719 "compare_and_write": false, 00:02:43.719 "abort": true, 00:02:43.719 "nvme_admin": false, 00:02:43.719 "nvme_io": false 00:02:43.719 }, 00:02:43.719 "memory_domains": [ 00:02:43.719 { 00:02:43.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:43.719 "dma_device_type": 2 00:02:43.719 } 00:02:43.719 ], 00:02:43.719 "driver_specific": { 00:02:43.719 "passthru": { 00:02:43.719 "name": "Passthru0", 00:02:43.719 "base_bdev_name": "Malloc0" 00:02:43.719 } 00:02:43.719 } 00:02:43.719 } 00:02:43.719 ]' 00:02:43.719 13:26:22 -- rpc/rpc.sh@21 -- # jq length 00:02:43.719 13:26:22 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:02:43.719 13:26:22 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:02:43.719 13:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.719 13:26:22 -- common/autotest_common.sh@10 -- # set +x 00:02:43.719 13:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.719 13:26:22 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:02:43.719 13:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.719 13:26:22 -- common/autotest_common.sh@10 -- # set +x 00:02:43.719 13:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.719 13:26:22 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:02:43.719 13:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.719 13:26:22 -- common/autotest_common.sh@10 -- # set +x 00:02:43.719 13:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.719 13:26:22 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:02:43.719 13:26:22 -- rpc/rpc.sh@26 -- # jq length 00:02:43.719 13:26:22 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:02:43.719 00:02:43.719 real 0m0.127s 00:02:43.719 user 0m0.025s 00:02:43.719 sys 0m0.045s 00:02:43.719 13:26:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:43.719 13:26:22 -- common/autotest_common.sh@10 -- # set +x 00:02:43.719 ************************************ 00:02:43.719 END TEST rpc_integrity 00:02:43.719 ************************************ 00:02:43.719 13:26:22 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:02:43.719 13:26:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:43.719 13:26:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:43.719 13:26:22 -- common/autotest_common.sh@10 -- # set +x 00:02:43.719 ************************************ 00:02:43.719 START TEST rpc_plugins 00:02:43.719 ************************************ 00:02:43.719 13:26:22 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:02:43.719 13:26:22 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:02:43.719 13:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.719 13:26:22 -- common/autotest_common.sh@10 -- # set +x 00:02:43.719 13:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.719 13:26:22 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:02:43.719 13:26:22 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:02:43.719 13:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.719 13:26:22 -- common/autotest_common.sh@10 -- # set +x 00:02:43.719 13:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.719 13:26:22 -- rpc/rpc.sh@31 -- # bdevs='[ 00:02:43.719 { 00:02:43.719 "name": "Malloc1", 00:02:43.719 "aliases": [ 00:02:43.719 "0033282d-3ec0-11ef-b9c4-5b09e08d4792" 00:02:43.719 ], 00:02:43.719 "product_name": 
"Malloc disk", 00:02:43.719 "block_size": 4096, 00:02:43.719 "num_blocks": 256, 00:02:43.719 "uuid": "0033282d-3ec0-11ef-b9c4-5b09e08d4792", 00:02:43.719 "assigned_rate_limits": { 00:02:43.719 "rw_ios_per_sec": 0, 00:02:43.719 "rw_mbytes_per_sec": 0, 00:02:43.719 "r_mbytes_per_sec": 0, 00:02:43.719 "w_mbytes_per_sec": 0 00:02:43.719 }, 00:02:43.719 "claimed": false, 00:02:43.719 "zoned": false, 00:02:43.719 "supported_io_types": { 00:02:43.719 "read": true, 00:02:43.719 "write": true, 00:02:43.719 "unmap": true, 00:02:43.719 "write_zeroes": true, 00:02:43.719 "flush": true, 00:02:43.719 "reset": true, 00:02:43.719 "compare": false, 00:02:43.719 "compare_and_write": false, 00:02:43.719 "abort": true, 00:02:43.719 "nvme_admin": false, 00:02:43.719 "nvme_io": false 00:02:43.719 }, 00:02:43.719 "memory_domains": [ 00:02:43.719 { 00:02:43.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:43.720 "dma_device_type": 2 00:02:43.720 } 00:02:43.720 ], 00:02:43.720 "driver_specific": {} 00:02:43.720 } 00:02:43.720 ]' 00:02:43.720 13:26:22 -- rpc/rpc.sh@32 -- # jq length 00:02:43.720 13:26:23 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:02:43.720 13:26:23 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:02:43.720 13:26:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.720 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:43.720 13:26:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.720 13:26:23 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:02:43.720 13:26:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.720 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:43.720 13:26:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.720 13:26:23 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:02:43.720 13:26:23 -- rpc/rpc.sh@36 -- # jq length 00:02:43.720 13:26:23 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:02:43.720 00:02:43.720 real 0m0.067s 00:02:43.720 user 0m0.022s 00:02:43.720 sys 0m0.013s 00:02:43.720 13:26:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:43.720 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:43.720 ************************************ 00:02:43.720 END TEST rpc_plugins 00:02:43.720 ************************************ 00:02:43.720 13:26:23 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:02:43.720 13:26:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:43.720 13:26:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:43.720 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:43.720 ************************************ 00:02:43.720 START TEST rpc_trace_cmd_test 00:02:43.720 ************************************ 00:02:43.720 13:26:23 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:02:43.720 13:26:23 -- rpc/rpc.sh@40 -- # local info 00:02:43.720 13:26:23 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:02:43.720 13:26:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.720 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:43.997 13:26:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.997 13:26:23 -- rpc/rpc.sh@42 -- # info='{ 00:02:43.997 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid45200", 00:02:43.997 "tpoint_group_mask": "0x8", 00:02:43.997 "iscsi_conn": { 00:02:43.997 "mask": "0x2", 00:02:43.997 "tpoint_mask": "0x0" 00:02:43.997 }, 00:02:43.997 "scsi": { 00:02:43.997 "mask": "0x4", 00:02:43.997 "tpoint_mask": "0x0" 00:02:43.997 }, 00:02:43.997 "bdev": { 00:02:43.997 "mask": "0x8", 
00:02:43.997 "tpoint_mask": "0xffffffffffffffff" 00:02:43.997 }, 00:02:43.997 "nvmf_rdma": { 00:02:43.997 "mask": "0x10", 00:02:43.997 "tpoint_mask": "0x0" 00:02:43.997 }, 00:02:43.997 "nvmf_tcp": { 00:02:43.997 "mask": "0x20", 00:02:43.997 "tpoint_mask": "0x0" 00:02:43.997 }, 00:02:43.997 "blobfs": { 00:02:43.997 "mask": "0x80", 00:02:43.997 "tpoint_mask": "0x0" 00:02:43.997 }, 00:02:43.997 "dsa": { 00:02:43.997 "mask": "0x200", 00:02:43.997 "tpoint_mask": "0x0" 00:02:43.997 }, 00:02:43.997 "thread": { 00:02:43.998 "mask": "0x400", 00:02:43.998 "tpoint_mask": "0x0" 00:02:43.998 }, 00:02:43.998 "nvme_pcie": { 00:02:43.998 "mask": "0x800", 00:02:43.998 "tpoint_mask": "0x0" 00:02:43.998 }, 00:02:43.998 "iaa": { 00:02:43.998 "mask": "0x1000", 00:02:43.998 "tpoint_mask": "0x0" 00:02:43.998 }, 00:02:43.998 "nvme_tcp": { 00:02:43.998 "mask": "0x2000", 00:02:43.998 "tpoint_mask": "0x0" 00:02:43.998 }, 00:02:43.998 "bdev_nvme": { 00:02:43.998 "mask": "0x4000", 00:02:43.998 "tpoint_mask": "0x0" 00:02:43.998 } 00:02:43.998 }' 00:02:43.998 13:26:23 -- rpc/rpc.sh@43 -- # jq length 00:02:43.998 13:26:23 -- rpc/rpc.sh@43 -- # '[' 14 -gt 2 ']' 00:02:43.998 13:26:23 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:02:43.998 13:26:23 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:02:43.998 13:26:23 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:02:43.998 13:26:23 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:02:43.998 13:26:23 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:02:43.998 13:26:23 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:02:43.998 13:26:23 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:02:43.998 13:26:23 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:02:43.998 00:02:43.998 real 0m0.051s 00:02:43.998 user 0m0.029s 00:02:43.998 sys 0m0.016s 00:02:43.998 13:26:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:43.998 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:43.998 ************************************ 00:02:43.998 END TEST rpc_trace_cmd_test 00:02:43.998 ************************************ 00:02:43.998 13:26:23 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:02:43.998 13:26:23 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:02:43.998 13:26:23 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:02:43.998 13:26:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:43.998 13:26:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:43.998 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:43.998 ************************************ 00:02:43.998 START TEST rpc_daemon_integrity 00:02:43.998 ************************************ 00:02:43.998 13:26:23 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:02:43.998 13:26:23 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:02:43.998 13:26:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.998 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:43.998 13:26:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.998 13:26:23 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:02:43.998 13:26:23 -- rpc/rpc.sh@13 -- # jq length 00:02:43.998 13:26:23 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:02:43.998 13:26:23 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:02:43.998 13:26:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.998 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:43.998 13:26:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.998 13:26:23 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:02:43.998 13:26:23 -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:02:43.998 13:26:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.998 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:43.998 13:26:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.998 13:26:23 -- rpc/rpc.sh@16 -- # bdevs='[ 00:02:43.998 { 00:02:43.998 "name": "Malloc2", 00:02:43.998 "aliases": [ 00:02:43.998 "0052e5ef-3ec0-11ef-b9c4-5b09e08d4792" 00:02:43.998 ], 00:02:43.998 "product_name": "Malloc disk", 00:02:43.998 "block_size": 512, 00:02:43.998 "num_blocks": 16384, 00:02:43.998 "uuid": "0052e5ef-3ec0-11ef-b9c4-5b09e08d4792", 00:02:43.998 "assigned_rate_limits": { 00:02:43.998 "rw_ios_per_sec": 0, 00:02:43.998 "rw_mbytes_per_sec": 0, 00:02:43.998 "r_mbytes_per_sec": 0, 00:02:43.998 "w_mbytes_per_sec": 0 00:02:43.998 }, 00:02:43.998 "claimed": false, 00:02:43.998 "zoned": false, 00:02:43.998 "supported_io_types": { 00:02:43.998 "read": true, 00:02:43.998 "write": true, 00:02:43.998 "unmap": true, 00:02:43.998 "write_zeroes": true, 00:02:43.998 "flush": true, 00:02:43.998 "reset": true, 00:02:43.998 "compare": false, 00:02:43.998 "compare_and_write": false, 00:02:43.998 "abort": true, 00:02:43.998 "nvme_admin": false, 00:02:43.998 "nvme_io": false 00:02:43.998 }, 00:02:43.998 "memory_domains": [ 00:02:43.998 { 00:02:43.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:43.998 "dma_device_type": 2 00:02:43.998 } 00:02:43.998 ], 00:02:43.998 "driver_specific": {} 00:02:43.998 } 00:02:43.998 ]' 00:02:43.998 13:26:23 -- rpc/rpc.sh@17 -- # jq length 00:02:43.998 13:26:23 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:02:43.998 13:26:23 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:02:43.998 13:26:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.998 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:43.998 [2024-07-10 13:26:23.217047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:02:43.998 [2024-07-10 13:26:23.217113] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:02:43.998 [2024-07-10 13:26:23.217140] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa5d780 00:02:43.998 [2024-07-10 13:26:23.217147] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:02:43.998 [2024-07-10 13:26:23.217688] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:02:43.998 [2024-07-10 13:26:23.217724] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:02:43.998 Passthru0 00:02:43.998 13:26:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.998 13:26:23 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:02:43.998 13:26:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.998 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:43.998 13:26:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.998 13:26:23 -- rpc/rpc.sh@20 -- # bdevs='[ 00:02:43.998 { 00:02:43.998 "name": "Malloc2", 00:02:43.998 "aliases": [ 00:02:43.998 "0052e5ef-3ec0-11ef-b9c4-5b09e08d4792" 00:02:43.998 ], 00:02:43.998 "product_name": "Malloc disk", 00:02:43.998 "block_size": 512, 00:02:43.998 "num_blocks": 16384, 00:02:43.998 "uuid": "0052e5ef-3ec0-11ef-b9c4-5b09e08d4792", 00:02:43.998 "assigned_rate_limits": { 00:02:43.998 "rw_ios_per_sec": 0, 00:02:43.998 "rw_mbytes_per_sec": 0, 00:02:43.998 "r_mbytes_per_sec": 0, 00:02:43.998 "w_mbytes_per_sec": 0 00:02:43.998 }, 00:02:43.998 "claimed": true, 00:02:43.998 
"claim_type": "exclusive_write", 00:02:43.998 "zoned": false, 00:02:43.998 "supported_io_types": { 00:02:43.998 "read": true, 00:02:43.998 "write": true, 00:02:43.998 "unmap": true, 00:02:43.998 "write_zeroes": true, 00:02:43.998 "flush": true, 00:02:43.998 "reset": true, 00:02:43.998 "compare": false, 00:02:43.998 "compare_and_write": false, 00:02:43.998 "abort": true, 00:02:43.998 "nvme_admin": false, 00:02:43.998 "nvme_io": false 00:02:43.998 }, 00:02:43.998 "memory_domains": [ 00:02:43.998 { 00:02:43.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:43.998 "dma_device_type": 2 00:02:43.998 } 00:02:43.998 ], 00:02:43.998 "driver_specific": {} 00:02:43.998 }, 00:02:43.998 { 00:02:43.998 "name": "Passthru0", 00:02:43.998 "aliases": [ 00:02:43.998 "116fca5f-7069-7b52-8988-4490f6a22c48" 00:02:43.998 ], 00:02:43.998 "product_name": "passthru", 00:02:43.998 "block_size": 512, 00:02:43.998 "num_blocks": 16384, 00:02:43.998 "uuid": "116fca5f-7069-7b52-8988-4490f6a22c48", 00:02:43.998 "assigned_rate_limits": { 00:02:43.998 "rw_ios_per_sec": 0, 00:02:43.998 "rw_mbytes_per_sec": 0, 00:02:43.998 "r_mbytes_per_sec": 0, 00:02:43.998 "w_mbytes_per_sec": 0 00:02:43.998 }, 00:02:43.998 "claimed": false, 00:02:43.998 "zoned": false, 00:02:43.998 "supported_io_types": { 00:02:43.998 "read": true, 00:02:43.998 "write": true, 00:02:43.998 "unmap": true, 00:02:43.998 "write_zeroes": true, 00:02:43.998 "flush": true, 00:02:43.998 "reset": true, 00:02:43.998 "compare": false, 00:02:43.998 "compare_and_write": false, 00:02:43.998 "abort": true, 00:02:43.998 "nvme_admin": false, 00:02:43.998 "nvme_io": false 00:02:43.998 }, 00:02:43.998 "memory_domains": [ 00:02:43.998 { 00:02:43.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:43.998 "dma_device_type": 2 00:02:43.998 } 00:02:43.998 ], 00:02:43.998 "driver_specific": { 00:02:43.998 "passthru": { 00:02:43.998 "name": "Passthru0", 00:02:43.998 "base_bdev_name": "Malloc2" 00:02:43.998 } 00:02:43.998 } 00:02:43.998 } 00:02:43.998 ]' 00:02:43.998 13:26:23 -- rpc/rpc.sh@21 -- # jq length 00:02:43.998 13:26:23 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:02:43.998 13:26:23 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:02:43.998 13:26:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.998 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:43.998 13:26:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.998 13:26:23 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:02:43.998 13:26:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.998 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:43.998 13:26:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.998 13:26:23 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:02:43.998 13:26:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:02:43.998 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:43.998 13:26:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:02:43.998 13:26:23 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:02:43.998 13:26:23 -- rpc/rpc.sh@26 -- # jq length 00:02:43.998 13:26:23 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:02:43.998 00:02:43.998 real 0m0.137s 00:02:43.998 user 0m0.011s 00:02:43.998 sys 0m0.067s 00:02:43.998 13:26:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:43.998 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:43.998 ************************************ 00:02:43.998 END TEST rpc_daemon_integrity 00:02:43.998 ************************************ 00:02:43.998 13:26:23 -- 
rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:02:43.998 13:26:23 -- rpc/rpc.sh@84 -- # killprocess 45200 00:02:43.998 13:26:23 -- common/autotest_common.sh@926 -- # '[' -z 45200 ']' 00:02:43.998 13:26:23 -- common/autotest_common.sh@930 -- # kill -0 45200 00:02:43.998 13:26:23 -- common/autotest_common.sh@931 -- # uname 00:02:43.998 13:26:23 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:02:43.998 13:26:23 -- common/autotest_common.sh@934 -- # ps -c -o command 45200 00:02:43.998 13:26:23 -- common/autotest_common.sh@934 -- # tail -1 00:02:43.998 13:26:23 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:02:43.998 13:26:23 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:02:43.998 killing process with pid 45200 00:02:43.998 13:26:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45200' 00:02:43.998 13:26:23 -- common/autotest_common.sh@945 -- # kill 45200 00:02:43.998 13:26:23 -- common/autotest_common.sh@950 -- # wait 45200 00:02:44.352 00:02:44.352 real 0m1.987s 00:02:44.352 user 0m1.973s 00:02:44.352 sys 0m0.941s 00:02:44.352 13:26:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:44.352 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:44.352 ************************************ 00:02:44.352 END TEST rpc 00:02:44.352 ************************************ 00:02:44.353 13:26:23 -- spdk/autotest.sh@177 -- # run_test rpc_client /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:02:44.353 13:26:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:44.353 13:26:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:44.353 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:44.353 ************************************ 00:02:44.353 START TEST rpc_client 00:02:44.353 ************************************ 00:02:44.353 13:26:23 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:02:44.612 * Looking for test storage... 
00:02:44.612 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc_client 00:02:44.612 13:26:23 -- rpc_client/rpc_client.sh@10 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:02:44.612 OK 00:02:44.612 13:26:23 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:02:44.612 00:02:44.613 real 0m0.153s 00:02:44.613 user 0m0.094s 00:02:44.613 sys 0m0.127s 00:02:44.613 13:26:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:44.613 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:44.613 ************************************ 00:02:44.613 END TEST rpc_client 00:02:44.613 ************************************ 00:02:44.613 13:26:23 -- spdk/autotest.sh@178 -- # run_test json_config /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:02:44.613 13:26:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:44.613 13:26:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:44.613 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:44.613 ************************************ 00:02:44.613 START TEST json_config 00:02:44.613 ************************************ 00:02:44.613 13:26:23 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:02:44.613 13:26:23 -- json_config/json_config.sh@8 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:44.613 13:26:23 -- nvmf/common.sh@7 -- # uname -s 00:02:44.613 13:26:23 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:02:44.613 13:26:23 -- nvmf/common.sh@7 -- # return 0 00:02:44.613 13:26:23 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:02:44.613 13:26:23 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:02:44.613 13:26:23 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:02:44.613 13:26:23 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:02:44.613 13:26:23 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:02:44.613 13:26:23 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:02:44.613 13:26:23 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:02:44.613 13:26:23 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:02:44.613 13:26:23 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:02:44.613 13:26:23 -- json_config/json_config.sh@32 -- # declare -A app_params 00:02:44.613 13:26:23 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/usr/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:02:44.613 13:26:23 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:02:44.613 13:26:23 -- json_config/json_config.sh@43 -- # last_event_id=0 00:02:44.613 13:26:23 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:02:44.613 INFO: JSON configuration test init 00:02:44.613 13:26:23 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:02:44.613 13:26:23 -- json_config/json_config.sh@420 -- # json_config_test_init 00:02:44.613 13:26:23 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:02:44.613 13:26:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:44.613 
13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:44.613 13:26:23 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:02:44.613 13:26:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:44.613 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:44.613 13:26:23 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:02:44.613 13:26:23 -- json_config/json_config.sh@98 -- # local app=target 00:02:44.613 13:26:23 -- json_config/json_config.sh@99 -- # shift 00:02:44.613 13:26:23 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:02:44.613 13:26:23 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:02:44.613 13:26:23 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:02:44.613 13:26:23 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:02:44.613 13:26:23 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:02:44.613 13:26:23 -- json_config/json_config.sh@110 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:02:44.613 13:26:23 -- json_config/json_config.sh@111 -- # app_pid[$app]=45407 00:02:44.613 13:26:23 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:02:44.613 Waiting for target to run... 00:02:44.613 13:26:23 -- json_config/json_config.sh@114 -- # waitforlisten 45407 /var/tmp/spdk_tgt.sock 00:02:44.613 13:26:23 -- common/autotest_common.sh@819 -- # '[' -z 45407 ']' 00:02:44.613 13:26:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:02:44.613 13:26:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:02:44.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:02:44.613 13:26:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:02:44.613 13:26:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:02:44.613 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:02:44.613 [2024-07-10 13:26:23.943074] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
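For context: the target under test was launched with --wait-for-rpc, so it comes up with no configuration and waits on the UNIX socket for the harness to drive it. A minimal sketch of that hand-off, using the paths and flags from this run; the readiness poll and the temporary file are illustrative only, the harness uses its own waitforlisten helper and feeds the gen_nvme.sh output to load_config directly:
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods > /dev/null     # assumed readiness check
  scripts/gen_nvme.sh --json-with-subsystems > /tmp/nvme_subsystems.json   # illustrative temp file
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < /tmp/nvme_subsystems.json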
00:02:44.613 [2024-07-10 13:26:23.943295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:02:44.871 EAL: TSC is not safe to use in SMP mode 00:02:44.871 EAL: TSC is not invariant 00:02:44.871 [2024-07-10 13:26:24.201426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:45.130 [2024-07-10 13:26:24.293890] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:02:45.130 [2024-07-10 13:26:24.293993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:02:45.697 13:26:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:02:45.697 13:26:24 -- common/autotest_common.sh@852 -- # return 0 00:02:45.697 00:02:45.697 13:26:24 -- json_config/json_config.sh@115 -- # echo '' 00:02:45.697 13:26:24 -- json_config/json_config.sh@322 -- # create_accel_config 00:02:45.697 13:26:24 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:02:45.697 13:26:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:45.697 13:26:24 -- common/autotest_common.sh@10 -- # set +x 00:02:45.697 13:26:24 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:02:45.697 13:26:24 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:02:45.697 13:26:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:02:45.697 13:26:24 -- common/autotest_common.sh@10 -- # set +x 00:02:45.697 13:26:24 -- json_config/json_config.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:02:45.697 13:26:24 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:02:45.697 13:26:24 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:02:45.956 [2024-07-10 13:26:25.268067] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:02:46.215 13:26:25 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:02:46.215 13:26:25 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:02:46.215 13:26:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:46.215 13:26:25 -- common/autotest_common.sh@10 -- # set +x 00:02:46.215 13:26:25 -- json_config/json_config.sh@48 -- # local ret=0 00:02:46.215 13:26:25 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:02:46.215 13:26:25 -- json_config/json_config.sh@49 -- # local enabled_types 00:02:46.215 13:26:25 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:02:46.215 13:26:25 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:02:46.215 13:26:25 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:02:46.473 13:26:25 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:02:46.473 13:26:25 -- json_config/json_config.sh@51 -- # local get_types 00:02:46.473 13:26:25 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:02:46.473 13:26:25 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:02:46.473 13:26:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:02:46.473 13:26:25 -- common/autotest_common.sh@10 -- # set +x 00:02:46.473 13:26:25 -- json_config/json_config.sh@58 -- # return 0 
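The notification-type check that just returned 0 verifies the target only advertises the bdev_register/bdev_unregister notification types. For reference, the same pair of calls can be replayed by hand against this run's socket, with the exact jq filters the script uses:
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]'
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 \
      | jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
The second call prints one bdev_register:<name> line per bdev created so far (only Nvme0n1 at this point); later the test compares that list, sorted, against its expected_notifications array.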
00:02:46.473 13:26:25 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:02:46.473 13:26:25 -- json_config/json_config.sh@332 -- # create_bdev_subsystem_config 00:02:46.473 13:26:25 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:02:46.473 13:26:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:46.473 13:26:25 -- common/autotest_common.sh@10 -- # set +x 00:02:46.473 13:26:25 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:02:46.473 13:26:25 -- json_config/json_config.sh@160 -- # local expected_notifications 00:02:46.473 13:26:25 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:02:46.473 13:26:25 -- json_config/json_config.sh@164 -- # get_notifications 00:02:46.473 13:26:25 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:02:46.473 13:26:25 -- json_config/json_config.sh@64 -- # IFS=: 00:02:46.473 13:26:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:46.473 13:26:25 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:02:46.473 13:26:25 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:02:46.473 13:26:25 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:02:46.731 13:26:25 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:02:46.731 13:26:25 -- json_config/json_config.sh@64 -- # IFS=: 00:02:46.731 13:26:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:46.731 13:26:25 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:02:46.731 13:26:25 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:02:46.731 13:26:25 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:02:46.731 13:26:25 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:02:46.989 Nvme0n1p0 Nvme0n1p1 00:02:46.989 13:26:26 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:02:46.989 13:26:26 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:02:47.248 [2024-07-10 13:26:26.373590] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:02:47.248 [2024-07-10 13:26:26.373659] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:02:47.248 00:02:47.248 13:26:26 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:02:47.248 13:26:26 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:02:47.506 Malloc3 00:02:47.506 13:26:26 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:02:47.506 13:26:26 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:02:47.764 [2024-07-10 13:26:26.877625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:02:47.764 [2024-07-10 13:26:26.877695] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:02:47.764 [2024-07-10 13:26:26.877725] vbdev_passthru.c: 676:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x827c40f00 00:02:47.764 [2024-07-10 13:26:26.877733] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:02:47.764 [2024-07-10 13:26:26.878259] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:02:47.764 [2024-07-10 13:26:26.878288] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:02:47.764 PTBdevFromMalloc3 00:02:47.764 13:26:26 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:02:47.764 13:26:26 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:02:47.764 Null0 00:02:47.764 13:26:27 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:02:47.764 13:26:27 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:02:48.022 Malloc0 00:02:48.022 13:26:27 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:02:48.022 13:26:27 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:02:48.283 Malloc1 00:02:48.283 13:26:27 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:02:48.283 13:26:27 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:02:48.853 102400+0 records in 00:02:48.853 102400+0 records out 00:02:48.853 104857600 bytes transferred in 0.348119 secs (301211720 bytes/sec) 00:02:48.853 13:26:27 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:02:48.853 13:26:27 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:02:49.111 aio_disk 00:02:49.111 13:26:28 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:02:49.111 13:26:28 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:02:49.111 13:26:28 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:02:49.111 0378a8d7-3ec0-11ef-b9c4-5b09e08d4792 00:02:49.371 13:26:28 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:02:49.371 13:26:28 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:02:49.371 13:26:28 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:02:49.371 13:26:28 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:02:49.371 13:26:28 -- json_config/json_config.sh@36 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:02:49.631 13:26:28 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:02:49.631 13:26:28 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:02:49.890 13:26:29 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:02:49.890 13:26:29 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:02:50.148 13:26:29 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:02:50.148 13:26:29 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:02:50.148 13:26:29 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:039a3abb-3ec0-11ef-b9c4-5b09e08d4792 bdev_register:03bb30d5-3ec0-11ef-b9c4-5b09e08d4792 bdev_register:03e379b1-3ec0-11ef-b9c4-5b09e08d4792 bdev_register:040d978d-3ec0-11ef-b9c4-5b09e08d4792 00:02:50.148 13:26:29 -- json_config/json_config.sh@70 -- # local events_to_check 00:02:50.148 13:26:29 -- json_config/json_config.sh@71 -- # local recorded_events 00:02:50.148 13:26:29 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:02:50.148 13:26:29 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:039a3abb-3ec0-11ef-b9c4-5b09e08d4792 bdev_register:03bb30d5-3ec0-11ef-b9c4-5b09e08d4792 bdev_register:03e379b1-3ec0-11ef-b9c4-5b09e08d4792 bdev_register:040d978d-3ec0-11ef-b9c4-5b09e08d4792 00:02:50.148 13:26:29 -- json_config/json_config.sh@74 -- # sort 00:02:50.148 13:26:29 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:02:50.148 13:26:29 -- json_config/json_config.sh@75 -- # sort 00:02:50.148 13:26:29 -- json_config/json_config.sh@75 -- # get_notifications 00:02:50.148 13:26:29 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:02:50.148 13:26:29 -- json_config/json_config.sh@64 -- # IFS=: 00:02:50.148 13:26:29 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:50.148 13:26:29 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:02:50.148 13:26:29 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:02:50.148 13:26:29 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:02:50.407 13:26:29 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # IFS=: 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:50.407 13:26:29 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:02:50.407 13:26:29 -- 
json_config/json_config.sh@64 -- # IFS=: 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:50.407 13:26:29 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p0 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # IFS=: 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:50.407 13:26:29 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # IFS=: 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:50.407 13:26:29 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # IFS=: 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:50.407 13:26:29 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # IFS=: 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:50.407 13:26:29 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # IFS=: 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:50.407 13:26:29 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # IFS=: 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:50.407 13:26:29 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # IFS=: 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:50.407 13:26:29 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # IFS=: 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:50.407 13:26:29 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # IFS=: 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:50.407 13:26:29 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # IFS=: 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:50.407 13:26:29 -- json_config/json_config.sh@65 -- # echo bdev_register:039a3abb-3ec0-11ef-b9c4-5b09e08d4792 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # IFS=: 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:50.407 13:26:29 -- json_config/json_config.sh@65 -- # echo bdev_register:03bb30d5-3ec0-11ef-b9c4-5b09e08d4792 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # IFS=: 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:50.407 13:26:29 -- json_config/json_config.sh@65 -- # echo bdev_register:03e379b1-3ec0-11ef-b9c4-5b09e08d4792 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # IFS=: 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:50.407 13:26:29 -- json_config/json_config.sh@65 -- # echo bdev_register:040d978d-3ec0-11ef-b9c4-5b09e08d4792 00:02:50.407 
13:26:29 -- json_config/json_config.sh@64 -- # IFS=: 00:02:50.407 13:26:29 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:02:50.407 13:26:29 -- json_config/json_config.sh@77 -- # [[ bdev_register:039a3abb-3ec0-11ef-b9c4-5b09e08d4792 bdev_register:03bb30d5-3ec0-11ef-b9c4-5b09e08d4792 bdev_register:03e379b1-3ec0-11ef-b9c4-5b09e08d4792 bdev_register:040d978d-3ec0-11ef-b9c4-5b09e08d4792 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\3\9\a\3\a\b\b\-\3\e\c\0\-\1\1\e\f\-\b\9\c\4\-\5\b\0\9\e\0\8\d\4\7\9\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\3\b\b\3\0\d\5\-\3\e\c\0\-\1\1\e\f\-\b\9\c\4\-\5\b\0\9\e\0\8\d\4\7\9\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\3\e\3\7\9\b\1\-\3\e\c\0\-\1\1\e\f\-\b\9\c\4\-\5\b\0\9\e\0\8\d\4\7\9\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\4\0\d\9\7\8\d\-\3\e\c\0\-\1\1\e\f\-\b\9\c\4\-\5\b\0\9\e\0\8\d\4\7\9\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:02:50.407 13:26:29 -- json_config/json_config.sh@89 -- # cat 00:02:50.407 13:26:29 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:039a3abb-3ec0-11ef-b9c4-5b09e08d4792 bdev_register:03bb30d5-3ec0-11ef-b9c4-5b09e08d4792 bdev_register:03e379b1-3ec0-11ef-b9c4-5b09e08d4792 bdev_register:040d978d-3ec0-11ef-b9c4-5b09e08d4792 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:02:50.407 Expected events matched: 00:02:50.407 bdev_register:039a3abb-3ec0-11ef-b9c4-5b09e08d4792 00:02:50.407 bdev_register:03bb30d5-3ec0-11ef-b9c4-5b09e08d4792 00:02:50.407 bdev_register:03e379b1-3ec0-11ef-b9c4-5b09e08d4792 00:02:50.407 bdev_register:040d978d-3ec0-11ef-b9c4-5b09e08d4792 00:02:50.407 bdev_register:Malloc0 00:02:50.407 bdev_register:Malloc0p0 00:02:50.407 bdev_register:Malloc0p1 00:02:50.407 bdev_register:Malloc0p2 00:02:50.407 bdev_register:Malloc1 00:02:50.407 bdev_register:Malloc3 00:02:50.407 bdev_register:Null0 00:02:50.407 bdev_register:Nvme0n1 00:02:50.407 bdev_register:Nvme0n1p0 00:02:50.407 bdev_register:Nvme0n1p1 00:02:50.407 bdev_register:PTBdevFromMalloc3 00:02:50.407 bdev_register:aio_disk 00:02:50.407 13:26:29 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:02:50.407 13:26:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:02:50.407 13:26:29 -- common/autotest_common.sh@10 -- # set +x 00:02:50.407 13:26:29 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:02:50.407 13:26:29 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:02:50.407 13:26:29 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:02:50.407 13:26:29 -- 
json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:02:50.407 13:26:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:02:50.407 13:26:29 -- common/autotest_common.sh@10 -- # set +x 00:02:50.666 13:26:29 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:02:50.666 13:26:29 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:02:50.666 13:26:29 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:02:50.666 MallocBdevForConfigChangeCheck 00:02:50.666 13:26:30 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:02:50.666 13:26:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:02:50.666 13:26:30 -- common/autotest_common.sh@10 -- # set +x 00:02:50.954 13:26:30 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:02:50.954 13:26:30 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:02:51.212 INFO: shutting down applications... 00:02:51.212 13:26:30 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:02:51.212 13:26:30 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:02:51.212 13:26:30 -- json_config/json_config.sh@431 -- # json_config_clear target 00:02:51.212 13:26:30 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:02:51.212 13:26:30 -- json_config/json_config.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:02:51.212 [2024-07-10 13:26:30.541788] vbdev_lvol.c: 151:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:02:51.470 Calling clear_iscsi_subsystem 00:02:51.470 Calling clear_nvmf_subsystem 00:02:51.470 Calling clear_bdev_subsystem 00:02:51.470 Calling clear_accel_subsystem 00:02:51.470 Calling clear_sock_subsystem 00:02:51.470 Calling clear_scheduler_subsystem 00:02:51.470 Calling clear_iobuf_subsystem 00:02:51.470 Calling clear_vmd_subsystem 00:02:51.470 13:26:30 -- json_config/json_config.sh@390 -- # local config_filter=/usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:02:51.470 13:26:30 -- json_config/json_config.sh@396 -- # count=100 00:02:51.470 13:26:30 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:02:51.470 13:26:30 -- json_config/json_config.sh@398 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:02:51.470 13:26:30 -- json_config/json_config.sh@398 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:02:51.470 13:26:30 -- json_config/json_config.sh@398 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:02:51.728 13:26:31 -- json_config/json_config.sh@398 -- # break 00:02:51.728 13:26:31 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:02:51.728 13:26:31 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:02:51.728 13:26:31 -- json_config/json_config.sh@120 -- # local app=target 00:02:51.728 13:26:31 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:02:51.728 13:26:31 -- json_config/json_config.sh@124 -- # [[ -n 45407 ]] 00:02:51.728 13:26:31 -- json_config/json_config.sh@127 -- # kill -SIGINT 45407 00:02:51.728 13:26:31 -- json_config/json_config.sh@129 -- # (( i = 0 )) 
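The shutdown above is cooperative rather than a hard kill: the harness sends SIGINT to the target (pid 45407 in this run) and then polls for up to 30 half-second intervals until kill -0 stops succeeding. A condensed sketch of that loop:
  kill -SIGINT 45407
  i=0
  while (( i < 30 )); do
      kill -0 45407 2> /dev/null || break   # target has exited, shutdown is done
      sleep 0.5
      (( i++ ))
  done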
00:02:51.728 13:26:31 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:02:51.728 13:26:31 -- json_config/json_config.sh@130 -- # kill -0 45407 00:02:51.728 13:26:31 -- json_config/json_config.sh@134 -- # sleep 0.5 00:02:52.294 13:26:31 -- json_config/json_config.sh@129 -- # (( i++ )) 00:02:52.294 13:26:31 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:02:52.294 13:26:31 -- json_config/json_config.sh@130 -- # kill -0 45407 00:02:52.294 13:26:31 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:02:52.294 13:26:31 -- json_config/json_config.sh@132 -- # break 00:02:52.294 13:26:31 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:02:52.294 SPDK target shutdown done 00:02:52.294 13:26:31 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:02:52.294 INFO: relaunching applications... 00:02:52.294 13:26:31 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:02:52.294 13:26:31 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:02:52.294 13:26:31 -- json_config/json_config.sh@98 -- # local app=target 00:02:52.294 13:26:31 -- json_config/json_config.sh@99 -- # shift 00:02:52.294 13:26:31 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:02:52.294 13:26:31 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:02:52.294 13:26:31 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:02:52.294 13:26:31 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:02:52.294 13:26:31 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:02:52.294 13:26:31 -- json_config/json_config.sh@110 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:02:52.294 13:26:31 -- json_config/json_config.sh@111 -- # app_pid[$app]=45565 00:02:52.294 13:26:31 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:02:52.294 Waiting for target to run... 00:02:52.294 13:26:31 -- json_config/json_config.sh@114 -- # waitforlisten 45565 /var/tmp/spdk_tgt.sock 00:02:52.294 13:26:31 -- common/autotest_common.sh@819 -- # '[' -z 45565 ']' 00:02:52.294 13:26:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:02:52.294 13:26:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:02:52.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:02:52.294 13:26:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:02:52.294 13:26:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:02:52.294 13:26:31 -- common/autotest_common.sh@10 -- # set +x 00:02:52.294 [2024-07-10 13:26:31.597553] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:02:52.294 [2024-07-10 13:26:31.597761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:02:52.552 EAL: TSC is not safe to use in SMP mode 00:02:52.552 EAL: TSC is not invariant 00:02:52.552 [2024-07-10 13:26:31.850127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:52.810 [2024-07-10 13:26:31.943468] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:02:52.810 [2024-07-10 13:26:31.943597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:02:52.810 [2024-07-10 13:26:32.073394] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:02:52.810 [2024-07-10 13:26:32.073452] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:02:52.810 [2024-07-10 13:26:32.081386] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:02:52.810 [2024-07-10 13:26:32.081421] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:02:52.810 [2024-07-10 13:26:32.089404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:02:52.810 [2024-07-10 13:26:32.089442] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:02:52.810 [2024-07-10 13:26:32.089449] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:02:52.810 [2024-07-10 13:26:32.097402] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:02:52.810 [2024-07-10 13:26:32.165922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:02:52.810 [2024-07-10 13:26:32.165999] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:02:52.810 [2024-07-10 13:26:32.166024] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bb71500 00:02:52.810 [2024-07-10 13:26:32.166031] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:02:52.810 [2024-07-10 13:26:32.166099] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:02:52.810 [2024-07-10 13:26:32.166108] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:02:53.376 13:26:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:02:53.376 13:26:32 -- common/autotest_common.sh@852 -- # return 0 00:02:53.376 00:02:53.376 13:26:32 -- json_config/json_config.sh@115 -- # echo '' 00:02:53.376 13:26:32 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:02:53.376 INFO: Checking if target configuration is the same... 00:02:53.376 13:26:32 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:02:53.376 13:26:32 -- json_config/json_config.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.kRLz7G /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:02:53.376 + '[' 2 -ne 2 ']' 00:02:53.376 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:02:53.376 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:02:53.376 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:02:53.376 +++ basename /tmp//sh-np.kRLz7G 00:02:53.376 ++ mktemp /tmp/sh-np.kRLz7G.XXX 00:02:53.376 + tmp_file_1=/tmp/sh-np.kRLz7G.34r 00:02:53.376 +++ basename /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:02:53.376 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:02:53.376 + tmp_file_2=/tmp/spdk_tgt_config.json.fZy 00:02:53.376 + ret=0 00:02:53.376 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:02:53.376 13:26:32 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:02:53.376 13:26:32 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:02:53.636 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:02:53.894 + diff -u /tmp/sh-np.kRLz7G.34r /tmp/spdk_tgt_config.json.fZy 00:02:53.894 + echo 'INFO: JSON config files are the same' 00:02:53.894 INFO: JSON config files are the same 00:02:53.894 + rm /tmp/sh-np.kRLz7G.34r /tmp/spdk_tgt_config.json.fZy 00:02:53.894 + exit 0 00:02:53.894 13:26:33 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:02:53.894 INFO: changing configuration and checking if this can be detected... 00:02:53.894 13:26:33 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:02:53.894 13:26:33 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:02:53.894 13:26:33 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:02:54.227 13:26:33 -- json_config/json_config.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.tYgRSe /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:02:54.227 + '[' 2 -ne 2 ']' 00:02:54.227 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:02:54.227 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/json_config/../.. 
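This second json_diff.sh invocation repeats the live-config comparison after MallocBdevForConfigChangeCheck is deleted, so this time a non-empty diff (ret=1) is the expected outcome. A minimal sketch of what each invocation does, assuming config_filter.py reads stdin and writes stdout as the trace suggests; the temp-file names here are illustrative:
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json
  test/json_config/config_filter.py -method sort < /tmp/live_config.json > /tmp/live_sorted.json
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/ref_sorted.json
  diff -u /tmp/live_sorted.json /tmp/ref_sorted.json && echo 'INFO: JSON config files are the same'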
00:02:54.227 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:02:54.227 +++ basename /tmp//sh-np.tYgRSe 00:02:54.227 ++ mktemp /tmp/sh-np.tYgRSe.XXX 00:02:54.227 + tmp_file_1=/tmp/sh-np.tYgRSe.7E1 00:02:54.227 +++ basename /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:02:54.227 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:02:54.227 + tmp_file_2=/tmp/spdk_tgt_config.json.BGY 00:02:54.227 + ret=0 00:02:54.227 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:02:54.227 13:26:33 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:02:54.227 13:26:33 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:02:54.496 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:02:54.496 + diff -u /tmp/sh-np.tYgRSe.7E1 /tmp/spdk_tgt_config.json.BGY 00:02:54.496 + ret=1 00:02:54.496 + echo '=== Start of file: /tmp/sh-np.tYgRSe.7E1 ===' 00:02:54.496 + cat /tmp/sh-np.tYgRSe.7E1 00:02:54.496 + echo '=== End of file: /tmp/sh-np.tYgRSe.7E1 ===' 00:02:54.496 + echo '' 00:02:54.496 + echo '=== Start of file: /tmp/spdk_tgt_config.json.BGY ===' 00:02:54.496 + cat /tmp/spdk_tgt_config.json.BGY 00:02:54.496 + echo '=== End of file: /tmp/spdk_tgt_config.json.BGY ===' 00:02:54.496 + echo '' 00:02:54.496 + rm /tmp/sh-np.tYgRSe.7E1 /tmp/spdk_tgt_config.json.BGY 00:02:54.496 + exit 1 00:02:54.496 INFO: configuration change detected. 00:02:54.496 13:26:33 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:02:54.496 13:26:33 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:02:54.496 13:26:33 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:02:54.496 13:26:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:54.496 13:26:33 -- common/autotest_common.sh@10 -- # set +x 00:02:54.496 13:26:33 -- json_config/json_config.sh@360 -- # local ret=0 00:02:54.496 13:26:33 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:02:54.496 13:26:33 -- json_config/json_config.sh@370 -- # [[ -n 45565 ]] 00:02:54.496 13:26:33 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:02:54.496 13:26:33 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:02:54.496 13:26:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:54.496 13:26:33 -- common/autotest_common.sh@10 -- # set +x 00:02:54.496 13:26:33 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:02:54.496 13:26:33 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:02:54.496 13:26:33 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:02:54.754 13:26:34 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:02:54.754 13:26:34 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:02:55.011 13:26:34 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:02:55.011 13:26:34 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:02:55.270 13:26:34 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:02:55.270 13:26:34 -- json_config/json_config.sh@36 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:02:55.529 13:26:34 -- json_config/json_config.sh@246 -- # uname -s 00:02:55.529 13:26:34 -- json_config/json_config.sh@246 -- # [[ FreeBSD = Linux ]] 00:02:55.529 13:26:34 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:02:55.529 13:26:34 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:02:55.529 13:26:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:02:55.529 13:26:34 -- common/autotest_common.sh@10 -- # set +x 00:02:55.529 13:26:34 -- json_config/json_config.sh@376 -- # killprocess 45565 00:02:55.529 13:26:34 -- common/autotest_common.sh@926 -- # '[' -z 45565 ']' 00:02:55.529 13:26:34 -- common/autotest_common.sh@930 -- # kill -0 45565 00:02:55.529 13:26:34 -- common/autotest_common.sh@931 -- # uname 00:02:55.529 13:26:34 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:02:55.529 13:26:34 -- common/autotest_common.sh@934 -- # ps -c -o command 45565 00:02:55.529 13:26:34 -- common/autotest_common.sh@934 -- # tail -1 00:02:55.529 13:26:34 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:02:55.529 13:26:34 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:02:55.529 killing process with pid 45565 00:02:55.529 13:26:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45565' 00:02:55.529 13:26:34 -- common/autotest_common.sh@945 -- # kill 45565 00:02:55.529 13:26:34 -- common/autotest_common.sh@950 -- # wait 45565 00:02:55.788 13:26:35 -- json_config/json_config.sh@379 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:02:55.788 13:26:35 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:02:55.788 13:26:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:02:55.788 13:26:35 -- common/autotest_common.sh@10 -- # set +x 00:02:55.788 13:26:35 -- json_config/json_config.sh@381 -- # return 0 00:02:55.788 INFO: Success 00:02:55.788 13:26:35 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:02:55.788 00:02:55.788 real 0m11.324s 00:02:55.788 user 0m17.785s 00:02:55.788 sys 0m1.842s 00:02:55.788 13:26:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:55.788 13:26:35 -- common/autotest_common.sh@10 -- # set +x 00:02:55.788 ************************************ 00:02:55.788 END TEST json_config 00:02:55.788 ************************************ 00:02:55.788 13:26:35 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:02:55.788 13:26:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:55.788 13:26:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:55.788 13:26:35 -- common/autotest_common.sh@10 -- # set +x 00:02:55.788 ************************************ 00:02:55.788 START TEST json_config_extra_key 00:02:55.788 ************************************ 00:02:55.788 13:26:35 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:56.047 13:26:35 -- nvmf/common.sh@7 -- # uname -s 00:02:56.047 13:26:35 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:02:56.047 13:26:35 -- nvmf/common.sh@7 -- # return 0 00:02:56.047 13:26:35 -- 
json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:02:56.047 INFO: launching applications... 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@25 -- # shift 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=45681 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:02:56.047 Waiting for target to run... 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:02:56.047 13:26:35 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 45681 /var/tmp/spdk_tgt.sock 00:02:56.047 13:26:35 -- common/autotest_common.sh@819 -- # '[' -z 45681 ']' 00:02:56.047 13:26:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:02:56.047 13:26:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:02:56.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:02:56.048 13:26:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:02:56.048 13:26:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:02:56.048 13:26:35 -- common/autotest_common.sh@10 -- # set +x 00:02:56.048 [2024-07-10 13:26:35.267436] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
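The extra_key harness above keeps one entry per app in the app_pid, app_socket, app_params and configs_path associative arrays, launches spdk_tgt with --json extra_key.json, and then sits in waitforlisten until the RPC socket answers. A minimal sketch of that kind of readiness poll, reusing the rpc.py path and /var/tmp/spdk_tgt.sock socket from this log (the retry count and sleep interval are illustrative, not the harness's actual values):

  # keep probing the target's RPC socket until it responds or we give up
  for _ in $(seq 1 100); do
    if /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version >/dev/null 2>&1; then
      break   # target is up and serving RPCs
    fi
    sleep 0.5
  done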
00:02:56.048 [2024-07-10 13:26:35.267595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:02:56.306 EAL: TSC is not safe to use in SMP mode 00:02:56.306 EAL: TSC is not invariant 00:02:56.306 [2024-07-10 13:26:35.503784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:56.306 [2024-07-10 13:26:35.611773] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:02:56.306 [2024-07-10 13:26:35.611932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:02:57.266 13:26:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:02:57.266 13:26:36 -- common/autotest_common.sh@852 -- # return 0 00:02:57.266 00:02:57.266 13:26:36 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:02:57.266 INFO: shutting down applications... 00:02:57.266 13:26:36 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:02:57.266 13:26:36 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:02:57.266 13:26:36 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:02:57.266 13:26:36 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:02:57.266 13:26:36 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 45681 ]] 00:02:57.266 13:26:36 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 45681 00:02:57.266 13:26:36 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:02:57.266 13:26:36 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:02:57.266 13:26:36 -- json_config/json_config_extra_key.sh@50 -- # kill -0 45681 00:02:57.266 13:26:36 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:02:57.832 13:26:36 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:02:57.832 13:26:36 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:02:57.832 13:26:36 -- json_config/json_config_extra_key.sh@50 -- # kill -0 45681 00:02:57.832 13:26:36 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:02:57.832 13:26:36 -- json_config/json_config_extra_key.sh@52 -- # break 00:02:57.832 13:26:36 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:02:57.832 SPDK target shutdown done 00:02:57.832 13:26:36 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:02:57.832 Success 00:02:57.832 13:26:36 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:02:57.832 00:02:57.832 real 0m1.832s 00:02:57.832 user 0m1.820s 00:02:57.832 sys 0m0.396s 00:02:57.832 13:26:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:57.832 13:26:36 -- common/autotest_common.sh@10 -- # set +x 00:02:57.832 ************************************ 00:02:57.832 END TEST json_config_extra_key 00:02:57.832 ************************************ 00:02:57.832 13:26:36 -- spdk/autotest.sh@180 -- # run_test alias_rpc /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:02:57.832 13:26:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:57.832 13:26:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:57.832 13:26:36 -- common/autotest_common.sh@10 -- # set +x 00:02:57.832 ************************************ 00:02:57.832 START TEST alias_rpc 00:02:57.832 ************************************ 00:02:57.832 13:26:36 -- common/autotest_common.sh@1104 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:02:57.832 * Looking for test storage... 00:02:57.832 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:02:57.832 13:26:37 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:02:57.832 13:26:37 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=45730 00:02:57.832 13:26:37 -- alias_rpc/alias_rpc.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:02:57.832 13:26:37 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 45730 00:02:57.832 13:26:37 -- common/autotest_common.sh@819 -- # '[' -z 45730 ']' 00:02:57.832 13:26:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:02:57.832 13:26:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:02:57.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:02:57.832 13:26:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:02:57.832 13:26:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:02:57.832 13:26:37 -- common/autotest_common.sh@10 -- # set +x 00:02:57.832 [2024-07-10 13:26:37.131339] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:02:57.832 [2024-07-10 13:26:37.131554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:02:58.396 EAL: TSC is not safe to use in SMP mode 00:02:58.396 EAL: TSC is not invariant 00:02:58.396 [2024-07-10 13:26:37.621982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:58.397 [2024-07-10 13:26:37.708343] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:02:58.397 [2024-07-10 13:26:37.708450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:02:58.962 13:26:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:02:58.962 13:26:38 -- common/autotest_common.sh@852 -- # return 0 00:02:58.962 13:26:38 -- alias_rpc/alias_rpc.sh@17 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:02:59.239 13:26:38 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 45730 00:02:59.239 13:26:38 -- common/autotest_common.sh@926 -- # '[' -z 45730 ']' 00:02:59.239 13:26:38 -- common/autotest_common.sh@930 -- # kill -0 45730 00:02:59.239 13:26:38 -- common/autotest_common.sh@931 -- # uname 00:02:59.239 13:26:38 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:02:59.239 13:26:38 -- common/autotest_common.sh@934 -- # ps -c -o command 45730 00:02:59.239 13:26:38 -- common/autotest_common.sh@934 -- # tail -1 00:02:59.239 13:26:38 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:02:59.239 13:26:38 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:02:59.239 killing process with pid 45730 00:02:59.239 13:26:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45730' 00:02:59.239 13:26:38 -- common/autotest_common.sh@945 -- # kill 45730 00:02:59.239 13:26:38 -- common/autotest_common.sh@950 -- # wait 45730 00:02:59.499 00:02:59.499 real 0m1.710s 00:02:59.499 user 0m1.878s 00:02:59.499 sys 0m0.706s 00:02:59.499 13:26:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:59.499 13:26:38 -- common/autotest_common.sh@10 -- # set +x 00:02:59.499 ************************************ 00:02:59.499 END TEST alias_rpc 00:02:59.499 
************************************ 00:02:59.499 13:26:38 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:02:59.499 13:26:38 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:02:59.499 13:26:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:59.499 13:26:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:59.499 13:26:38 -- common/autotest_common.sh@10 -- # set +x 00:02:59.499 ************************************ 00:02:59.499 START TEST spdkcli_tcp 00:02:59.499 ************************************ 00:02:59.499 13:26:38 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:02:59.756 * Looking for test storage... 00:02:59.756 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/spdkcli 00:02:59.756 13:26:38 -- spdkcli/tcp.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:02:59.756 13:26:38 -- spdkcli/common.sh@6 -- # spdkcli_job=/usr/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:02:59.756 13:26:38 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/usr/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:02:59.756 13:26:38 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:02:59.756 13:26:38 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:02:59.756 13:26:38 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:02:59.756 13:26:38 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:02:59.756 13:26:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:59.756 13:26:38 -- common/autotest_common.sh@10 -- # set +x 00:02:59.756 13:26:38 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=45786 00:02:59.756 13:26:38 -- spdkcli/tcp.sh@27 -- # waitforlisten 45786 00:02:59.756 13:26:38 -- spdkcli/tcp.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:02:59.756 13:26:38 -- common/autotest_common.sh@819 -- # '[' -z 45786 ']' 00:02:59.756 13:26:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:02:59.756 13:26:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:02:59.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:02:59.756 13:26:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:02:59.756 13:26:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:02:59.756 13:26:38 -- common/autotest_common.sh@10 -- # set +x 00:02:59.756 [2024-07-10 13:26:38.881655] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:02:59.756 [2024-07-10 13:26:38.881875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:00.014 EAL: TSC is not safe to use in SMP mode 00:03:00.014 EAL: TSC is not invariant 00:03:00.014 [2024-07-10 13:26:39.373527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:00.273 [2024-07-10 13:26:39.469882] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:00.273 [2024-07-10 13:26:39.470123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:00.273 [2024-07-10 13:26:39.470126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:00.840 13:26:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:00.840 13:26:39 -- common/autotest_common.sh@852 -- # return 0 00:03:00.840 13:26:39 -- spdkcli/tcp.sh@31 -- # socat_pid=45790 00:03:00.840 13:26:39 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:03:00.840 13:26:39 -- spdkcli/tcp.sh@33 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:03:00.840 [ 00:03:00.840 "spdk_get_version", 00:03:00.840 "rpc_get_methods", 00:03:00.840 "env_dpdk_get_mem_stats", 00:03:00.840 "trace_get_info", 00:03:00.840 "trace_get_tpoint_group_mask", 00:03:00.840 "trace_disable_tpoint_group", 00:03:00.840 "trace_enable_tpoint_group", 00:03:00.840 "trace_clear_tpoint_mask", 00:03:00.840 "trace_set_tpoint_mask", 00:03:00.840 "notify_get_notifications", 00:03:00.840 "notify_get_types", 00:03:00.840 "accel_get_stats", 00:03:00.840 "accel_set_options", 00:03:00.840 "accel_set_driver", 00:03:00.840 "accel_crypto_key_destroy", 00:03:00.840 "accel_crypto_keys_get", 00:03:00.840 "accel_crypto_key_create", 00:03:00.840 "accel_assign_opc", 00:03:00.840 "accel_get_module_info", 00:03:00.840 "accel_get_opc_assignments", 00:03:00.840 "bdev_get_histogram", 00:03:00.840 "bdev_enable_histogram", 00:03:00.840 "bdev_set_qos_limit", 00:03:00.840 "bdev_set_qd_sampling_period", 00:03:00.840 "bdev_get_bdevs", 00:03:00.840 "bdev_reset_iostat", 00:03:00.840 "bdev_get_iostat", 00:03:00.840 "bdev_examine", 00:03:00.840 "bdev_wait_for_examine", 00:03:00.840 "bdev_set_options", 00:03:00.840 "sock_set_default_impl", 00:03:00.840 "sock_impl_set_options", 00:03:00.840 "sock_impl_get_options", 00:03:00.840 "framework_get_pci_devices", 00:03:00.840 "framework_get_config", 00:03:00.840 "framework_get_subsystems", 00:03:00.840 "thread_set_cpumask", 00:03:00.840 "framework_get_scheduler", 00:03:00.840 "framework_set_scheduler", 00:03:00.840 "framework_get_reactors", 00:03:00.840 "thread_get_io_channels", 00:03:00.840 "thread_get_pollers", 00:03:00.840 "thread_get_stats", 00:03:00.840 "framework_monitor_context_switch", 00:03:00.840 "spdk_kill_instance", 00:03:00.840 "log_enable_timestamps", 00:03:00.840 "log_get_flags", 00:03:00.840 "log_clear_flag", 00:03:00.840 "log_set_flag", 00:03:00.840 "log_get_level", 00:03:00.840 "log_set_level", 00:03:00.840 "log_get_print_level", 00:03:00.840 "log_set_print_level", 00:03:00.840 "framework_enable_cpumask_locks", 00:03:00.840 "framework_disable_cpumask_locks", 00:03:00.840 "framework_wait_init", 00:03:00.840 "framework_start_init", 00:03:00.840 "iobuf_get_stats", 00:03:00.840 "iobuf_set_options", 00:03:00.840 "vmd_rescan", 00:03:00.840 "vmd_remove_device", 00:03:00.840 "vmd_enable", 00:03:00.840 "nvmf_subsystem_get_listeners", 00:03:00.840 "nvmf_subsystem_get_qpairs", 
00:03:00.840 "nvmf_subsystem_get_controllers", 00:03:00.840 "nvmf_get_stats", 00:03:00.840 "nvmf_get_transports", 00:03:00.840 "nvmf_create_transport", 00:03:00.840 "nvmf_get_targets", 00:03:00.840 "nvmf_delete_target", 00:03:00.840 "nvmf_create_target", 00:03:00.840 "nvmf_subsystem_allow_any_host", 00:03:00.840 "nvmf_subsystem_remove_host", 00:03:00.840 "nvmf_subsystem_add_host", 00:03:00.840 "nvmf_subsystem_remove_ns", 00:03:00.840 "nvmf_subsystem_add_ns", 00:03:00.840 "nvmf_subsystem_listener_set_ana_state", 00:03:00.840 "nvmf_discovery_get_referrals", 00:03:00.840 "nvmf_discovery_remove_referral", 00:03:00.840 "nvmf_discovery_add_referral", 00:03:00.840 "nvmf_subsystem_remove_listener", 00:03:00.840 "nvmf_subsystem_add_listener", 00:03:00.840 "nvmf_delete_subsystem", 00:03:00.840 "nvmf_create_subsystem", 00:03:00.840 "nvmf_get_subsystems", 00:03:00.840 "nvmf_set_crdt", 00:03:00.840 "nvmf_set_config", 00:03:00.840 "nvmf_set_max_subsystems", 00:03:00.840 "scsi_get_devices", 00:03:00.840 "iscsi_set_options", 00:03:00.840 "iscsi_get_auth_groups", 00:03:00.840 "iscsi_auth_group_remove_secret", 00:03:00.840 "iscsi_auth_group_add_secret", 00:03:00.840 "iscsi_delete_auth_group", 00:03:00.840 "iscsi_create_auth_group", 00:03:00.840 "iscsi_set_discovery_auth", 00:03:00.840 "iscsi_get_options", 00:03:00.840 "iscsi_target_node_request_logout", 00:03:00.840 "iscsi_target_node_set_redirect", 00:03:00.840 "iscsi_target_node_set_auth", 00:03:00.840 "iscsi_target_node_add_lun", 00:03:00.840 "iscsi_get_connections", 00:03:00.840 "iscsi_portal_group_set_auth", 00:03:00.840 "iscsi_start_portal_group", 00:03:00.840 "iscsi_delete_portal_group", 00:03:00.840 "iscsi_create_portal_group", 00:03:00.840 "iscsi_get_portal_groups", 00:03:00.840 "iscsi_delete_target_node", 00:03:00.840 "iscsi_target_node_remove_pg_ig_maps", 00:03:00.840 "iscsi_target_node_add_pg_ig_maps", 00:03:00.840 "iscsi_create_target_node", 00:03:00.840 "iscsi_get_target_nodes", 00:03:00.840 "iscsi_delete_initiator_group", 00:03:00.840 "iscsi_initiator_group_remove_initiators", 00:03:00.840 "iscsi_initiator_group_add_initiators", 00:03:00.840 "iscsi_create_initiator_group", 00:03:00.840 "iscsi_get_initiator_groups", 00:03:00.840 "iaa_scan_accel_module", 00:03:00.840 "dsa_scan_accel_module", 00:03:00.840 "ioat_scan_accel_module", 00:03:00.840 "accel_error_inject_error", 00:03:00.840 "bdev_aio_delete", 00:03:00.840 "bdev_aio_rescan", 00:03:00.840 "bdev_aio_create", 00:03:00.840 "blobfs_create", 00:03:00.840 "blobfs_detect", 00:03:00.840 "blobfs_set_cache_size", 00:03:00.840 "bdev_zone_block_delete", 00:03:00.840 "bdev_zone_block_create", 00:03:00.840 "bdev_delay_delete", 00:03:00.840 "bdev_delay_create", 00:03:00.840 "bdev_delay_update_latency", 00:03:00.840 "bdev_split_delete", 00:03:00.840 "bdev_split_create", 00:03:00.840 "bdev_error_inject_error", 00:03:00.840 "bdev_error_delete", 00:03:00.840 "bdev_error_create", 00:03:00.840 "bdev_raid_set_options", 00:03:00.840 "bdev_raid_remove_base_bdev", 00:03:00.840 "bdev_raid_add_base_bdev", 00:03:00.840 "bdev_raid_delete", 00:03:00.840 "bdev_raid_create", 00:03:00.840 "bdev_raid_get_bdevs", 00:03:00.840 "bdev_lvol_grow_lvstore", 00:03:00.840 "bdev_lvol_get_lvols", 00:03:00.840 "bdev_lvol_get_lvstores", 00:03:00.840 "bdev_lvol_delete", 00:03:00.840 "bdev_lvol_set_read_only", 00:03:00.840 "bdev_lvol_resize", 00:03:00.840 "bdev_lvol_decouple_parent", 00:03:00.840 "bdev_lvol_inflate", 00:03:00.840 "bdev_lvol_rename", 00:03:00.840 "bdev_lvol_clone_bdev", 00:03:00.840 "bdev_lvol_clone", 00:03:00.840 
"bdev_lvol_snapshot", 00:03:00.840 "bdev_lvol_create", 00:03:00.840 "bdev_lvol_delete_lvstore", 00:03:00.840 "bdev_lvol_rename_lvstore", 00:03:00.840 "bdev_lvol_create_lvstore", 00:03:00.840 "bdev_passthru_delete", 00:03:00.840 "bdev_passthru_create", 00:03:00.840 "bdev_nvme_send_cmd", 00:03:00.840 "bdev_nvme_get_path_iostat", 00:03:00.840 "bdev_nvme_get_mdns_discovery_info", 00:03:00.840 "bdev_nvme_stop_mdns_discovery", 00:03:00.840 "bdev_nvme_start_mdns_discovery", 00:03:00.840 "bdev_nvme_set_multipath_policy", 00:03:00.840 "bdev_nvme_set_preferred_path", 00:03:00.840 "bdev_nvme_get_io_paths", 00:03:00.840 "bdev_nvme_remove_error_injection", 00:03:00.840 "bdev_nvme_add_error_injection", 00:03:00.840 "bdev_nvme_get_discovery_info", 00:03:00.840 "bdev_nvme_stop_discovery", 00:03:00.840 "bdev_nvme_start_discovery", 00:03:00.840 "bdev_nvme_get_controller_health_info", 00:03:00.840 "bdev_nvme_disable_controller", 00:03:00.840 "bdev_nvme_enable_controller", 00:03:00.840 "bdev_nvme_reset_controller", 00:03:00.840 "bdev_nvme_get_transport_statistics", 00:03:00.840 "bdev_nvme_apply_firmware", 00:03:00.840 "bdev_nvme_detach_controller", 00:03:00.840 "bdev_nvme_get_controllers", 00:03:00.840 "bdev_nvme_attach_controller", 00:03:00.840 "bdev_nvme_set_hotplug", 00:03:00.840 "bdev_nvme_set_options", 00:03:00.840 "bdev_null_resize", 00:03:00.840 "bdev_null_delete", 00:03:00.840 "bdev_null_create", 00:03:00.840 "bdev_malloc_delete", 00:03:00.840 "bdev_malloc_create" 00:03:00.840 ] 00:03:00.840 13:26:40 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:03:00.840 13:26:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:00.841 13:26:40 -- common/autotest_common.sh@10 -- # set +x 00:03:00.841 13:26:40 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:03:00.841 13:26:40 -- spdkcli/tcp.sh@38 -- # killprocess 45786 00:03:00.841 13:26:40 -- common/autotest_common.sh@926 -- # '[' -z 45786 ']' 00:03:00.841 13:26:40 -- common/autotest_common.sh@930 -- # kill -0 45786 00:03:00.841 13:26:40 -- common/autotest_common.sh@931 -- # uname 00:03:00.841 13:26:40 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:03:00.841 13:26:40 -- common/autotest_common.sh@934 -- # tail -1 00:03:00.841 13:26:40 -- common/autotest_common.sh@934 -- # ps -c -o command 45786 00:03:00.841 13:26:40 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:03:00.841 13:26:40 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:03:00.841 killing process with pid 45786 00:03:00.841 13:26:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45786' 00:03:00.841 13:26:40 -- common/autotest_common.sh@945 -- # kill 45786 00:03:00.841 13:26:40 -- common/autotest_common.sh@950 -- # wait 45786 00:03:01.099 00:03:01.099 real 0m1.663s 00:03:01.099 user 0m2.566s 00:03:01.099 sys 0m0.740s 00:03:01.099 13:26:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:01.099 13:26:40 -- common/autotest_common.sh@10 -- # set +x 00:03:01.099 ************************************ 00:03:01.099 END TEST spdkcli_tcp 00:03:01.099 ************************************ 00:03:01.099 13:26:40 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:01.099 13:26:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:01.099 13:26:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:01.099 13:26:40 -- common/autotest_common.sh@10 -- # set +x 00:03:01.099 ************************************ 
00:03:01.099 START TEST dpdk_mem_utility 00:03:01.099 ************************************ 00:03:01.099 13:26:40 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:01.357 * Looking for test storage... 00:03:01.358 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:03:01.358 13:26:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:03:01.358 13:26:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:01.358 13:26:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=45856 00:03:01.358 13:26:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 45856 00:03:01.358 13:26:40 -- common/autotest_common.sh@819 -- # '[' -z 45856 ']' 00:03:01.358 13:26:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:01.358 13:26:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:01.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:01.358 13:26:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:01.358 13:26:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:01.358 13:26:40 -- common/autotest_common.sh@10 -- # set +x 00:03:01.358 [2024-07-10 13:26:40.628773] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:01.358 [2024-07-10 13:26:40.628974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:01.935 EAL: TSC is not safe to use in SMP mode 00:03:01.935 EAL: TSC is not invariant 00:03:01.935 [2024-07-10 13:26:41.092961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:01.935 [2024-07-10 13:26:41.193593] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:01.935 [2024-07-10 13:26:41.193704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:02.274 13:26:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:02.274 13:26:41 -- common/autotest_common.sh@852 -- # return 0 00:03:02.274 13:26:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:03:02.274 13:26:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:03:02.274 13:26:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:02.274 13:26:41 -- common/autotest_common.sh@10 -- # set +x 00:03:02.534 { 00:03:02.534 "filename": "/tmp/spdk_mem_dump.txt" 00:03:02.534 } 00:03:02.534 13:26:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:02.534 13:26:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:03:02.534 DPDK memory size 2048.000000 MiB in 1 heap(s) 00:03:02.534 1 heaps totaling size 2048.000000 MiB 00:03:02.534 size: 2048.000000 MiB heap id: 0 00:03:02.534 end heaps---------- 00:03:02.534 8 mempools totaling size 592.563660 MiB 00:03:02.534 size: 212.271240 MiB name: PDU_immediate_data_Pool 00:03:02.534 size: 153.489014 MiB name: PDU_data_out_Pool 00:03:02.534 size: 84.500549 MiB name: bdev_io_45856 00:03:02.534 size: 51.008362 MiB name: evtpool_45856 00:03:02.534 size: 50.000549 MiB name: msgpool_45856 
00:03:02.534 size: 21.758911 MiB name: PDU_Pool 00:03:02.534 size: 19.508911 MiB name: SCSI_TASK_Pool 00:03:02.534 size: 0.026123 MiB name: Session_Pool 00:03:02.534 end mempools------- 00:03:02.534 6 memzones totaling size 4.142822 MiB 00:03:02.534 size: 1.000366 MiB name: RG_ring_0_45856 00:03:02.534 size: 1.000366 MiB name: RG_ring_1_45856 00:03:02.534 size: 1.000366 MiB name: RG_ring_4_45856 00:03:02.534 size: 1.000366 MiB name: RG_ring_5_45856 00:03:02.534 size: 0.125366 MiB name: RG_ring_2_45856 00:03:02.534 size: 0.015991 MiB name: RG_ring_3_45856 00:03:02.534 end memzones------- 00:03:02.534 13:26:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:03:02.534 heap id: 0 total size: 2048.000000 MiB number of busy elements: 39 number of free elements: 3 00:03:02.534 list of free elements. size: 1254.071899 MiB 00:03:02.534 element at address: 0x1060000000 with size: 1254.001099 MiB 00:03:02.534 element at address: 0x10c8000000 with size: 0.070129 MiB 00:03:02.534 element at address: 0x10d98b6000 with size: 0.000671 MiB 00:03:02.534 list of standard malloc elements. size: 197.217957 MiB 00:03:02.534 element at address: 0x10cd4b0f80 with size: 132.000122 MiB 00:03:02.534 element at address: 0x10d58b5f80 with size: 64.000122 MiB 00:03:02.534 element at address: 0x10c7efff80 with size: 1.000122 MiB 00:03:02.534 element at address: 0x10dffd9f00 with size: 0.140747 MiB 00:03:02.534 element at address: 0x10c8020c80 with size: 0.062622 MiB 00:03:02.534 element at address: 0x10dfffdf80 with size: 0.007935 MiB 00:03:02.534 element at address: 0x10d58b1000 with size: 0.000305 MiB 00:03:02.534 element at address: 0x10d58b18c0 with size: 0.000305 MiB 00:03:02.534 element at address: 0x10d58b1140 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d58b1200 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d58b12c0 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d58b1380 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d58b1440 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d58b1500 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d58b15c0 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d58b1680 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d58b1740 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d58b1800 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d58b1a00 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d58b1ac0 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d58b1cc0 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d98b62c0 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d98b6380 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d98b6440 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d98b6500 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d98b65c0 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d98b6680 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d98b6880 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d98b6940 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d98d6c00 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d98d6cc0 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d99d6f80 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d9ad7240 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10d9ad7300 with size: 
0.000183 MiB 00:03:02.534 element at address: 0x10dccd7640 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10dccd7840 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10dccd7900 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10dfed7c40 with size: 0.000183 MiB 00:03:02.534 element at address: 0x10dffd9e40 with size: 0.000183 MiB 00:03:02.534 list of memzone associated elements. size: 596.710144 MiB 00:03:02.534 element at address: 0x10b93f7f00 with size: 211.013000 MiB 00:03:02.534 associated memzone info: size: 211.012878 MiB name: MP_PDU_immediate_data_Pool_0 00:03:02.534 element at address: 0x10afa82c80 with size: 152.449524 MiB 00:03:02.534 associated memzone info: size: 152.449402 MiB name: MP_PDU_data_out_Pool_0 00:03:02.534 element at address: 0x10c8030d00 with size: 84.000122 MiB 00:03:02.534 associated memzone info: size: 84.000000 MiB name: MP_bdev_io_45856_0 00:03:02.534 element at address: 0x10dccd79c0 with size: 48.000122 MiB 00:03:02.534 associated memzone info: size: 48.000000 MiB name: MP_evtpool_45856_0 00:03:02.534 element at address: 0x10d9ad73c0 with size: 48.000122 MiB 00:03:02.534 associated memzone info: size: 48.000000 MiB name: MP_msgpool_45856_0 00:03:02.534 element at address: 0x10c683d780 with size: 20.250671 MiB 00:03:02.534 associated memzone info: size: 20.250549 MiB name: MP_PDU_Pool_0 00:03:02.534 element at address: 0x10ae700680 with size: 18.000671 MiB 00:03:02.534 associated memzone info: size: 18.000549 MiB name: MP_SCSI_TASK_Pool_0 00:03:02.534 element at address: 0x10dfcd7a40 with size: 2.000488 MiB 00:03:02.534 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_45856 00:03:02.534 element at address: 0x10dcad7440 with size: 2.000488 MiB 00:03:02.534 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_45856 00:03:02.534 element at address: 0x10dfed7d00 with size: 1.008118 MiB 00:03:02.534 associated memzone info: size: 1.007996 MiB name: MP_evtpool_45856 00:03:02.534 element at address: 0x10c7cfdc40 with size: 1.008118 MiB 00:03:02.534 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:03:02.534 element at address: 0x10c673b640 with size: 1.008118 MiB 00:03:02.534 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:03:02.534 element at address: 0x10b92f5dc0 with size: 1.008118 MiB 00:03:02.534 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:03:02.534 element at address: 0x10af980b40 with size: 1.008118 MiB 00:03:02.534 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:03:02.534 element at address: 0x10d99d7040 with size: 1.000488 MiB 00:03:02.534 associated memzone info: size: 1.000366 MiB name: RG_ring_0_45856 00:03:02.535 element at address: 0x10d98d6d80 with size: 1.000488 MiB 00:03:02.535 associated memzone info: size: 1.000366 MiB name: RG_ring_1_45856 00:03:02.535 element at address: 0x10c7dffd80 with size: 1.000488 MiB 00:03:02.535 associated memzone info: size: 1.000366 MiB name: RG_ring_4_45856 00:03:02.535 element at address: 0x10ae600480 with size: 1.000488 MiB 00:03:02.535 associated memzone info: size: 1.000366 MiB name: RG_ring_5_45856 00:03:02.535 element at address: 0x10cd430d80 with size: 0.500488 MiB 00:03:02.535 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_45856 00:03:02.535 element at address: 0x10c7c7da40 with size: 0.500488 MiB 00:03:02.535 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:03:02.535 element at address: 0x10af900940 with size: 0.500488 
MiB 00:03:02.535 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:03:02.535 element at address: 0x10c66fb440 with size: 0.250488 MiB 00:03:02.535 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:03:02.535 element at address: 0x10d98b6a00 with size: 0.125488 MiB 00:03:02.535 associated memzone info: size: 0.125366 MiB name: RG_ring_2_45856 00:03:02.535 element at address: 0x10c8018a80 with size: 0.031738 MiB 00:03:02.535 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:03:02.535 element at address: 0x10c8011f40 with size: 0.023743 MiB 00:03:02.535 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:03:02.535 element at address: 0x10d58b1d80 with size: 0.016113 MiB 00:03:02.535 associated memzone info: size: 0.015991 MiB name: RG_ring_3_45856 00:03:02.535 element at address: 0x10c8018080 with size: 0.002441 MiB 00:03:02.535 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:03:02.535 element at address: 0x10dccd7700 with size: 0.000305 MiB 00:03:02.535 associated memzone info: size: 0.000183 MiB name: MP_msgpool_45856 00:03:02.535 element at address: 0x10d58b1b80 with size: 0.000305 MiB 00:03:02.535 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_45856 00:03:02.535 element at address: 0x10d98b6740 with size: 0.000305 MiB 00:03:02.535 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:03:02.535 13:26:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:03:02.535 13:26:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 45856 00:03:02.535 13:26:41 -- common/autotest_common.sh@926 -- # '[' -z 45856 ']' 00:03:02.535 13:26:41 -- common/autotest_common.sh@930 -- # kill -0 45856 00:03:02.535 13:26:41 -- common/autotest_common.sh@931 -- # uname 00:03:02.535 13:26:41 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:03:02.535 13:26:41 -- common/autotest_common.sh@934 -- # ps -c -o command 45856 00:03:02.535 13:26:41 -- common/autotest_common.sh@934 -- # tail -1 00:03:02.535 13:26:41 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:03:02.535 13:26:41 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:03:02.535 killing process with pid 45856 00:03:02.535 13:26:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45856' 00:03:02.535 13:26:41 -- common/autotest_common.sh@945 -- # kill 45856 00:03:02.535 13:26:41 -- common/autotest_common.sh@950 -- # wait 45856 00:03:02.793 00:03:02.793 real 0m1.576s 00:03:02.793 user 0m1.606s 00:03:02.793 sys 0m0.689s 00:03:02.793 13:26:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:02.793 13:26:42 -- common/autotest_common.sh@10 -- # set +x 00:03:02.793 ************************************ 00:03:02.793 END TEST dpdk_mem_utility 00:03:02.793 ************************************ 00:03:02.793 13:26:42 -- spdk/autotest.sh@187 -- # run_test event /usr/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:03:02.793 13:26:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:02.793 13:26:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:02.793 13:26:42 -- common/autotest_common.sh@10 -- # set +x 00:03:02.793 ************************************ 00:03:02.793 START TEST event 00:03:02.793 ************************************ 00:03:02.793 13:26:42 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:03:03.051 * Looking for test storage... 
00:03:03.051 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/event 00:03:03.051 13:26:42 -- event/event.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:03:03.051 13:26:42 -- bdev/nbd_common.sh@6 -- # set -e 00:03:03.051 13:26:42 -- event/event.sh@45 -- # run_test event_perf /usr/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:03.051 13:26:42 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:03:03.051 13:26:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:03.051 13:26:42 -- common/autotest_common.sh@10 -- # set +x 00:03:03.051 ************************************ 00:03:03.051 START TEST event_perf 00:03:03.051 ************************************ 00:03:03.051 13:26:42 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:03.051 Running I/O for 1 seconds...[2024-07-10 13:26:42.251788] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:03.051 [2024-07-10 13:26:42.252304] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:03.616 EAL: TSC is not safe to use in SMP mode 00:03:03.616 EAL: TSC is not invariant 00:03:03.616 [2024-07-10 13:26:42.713073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:03.616 [2024-07-10 13:26:42.808611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:03.616 [2024-07-10 13:26:42.808969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:03.616 [2024-07-10 13:26:42.808766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:03:03.616 [2024-07-10 13:26:42.808970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:03:04.551 Running I/O for 1 seconds... 00:03:04.551 lcore 0: 2209982 00:03:04.551 lcore 1: 2209982 00:03:04.551 lcore 2: 2209981 00:03:04.551 lcore 3: 2209982 00:03:04.551 done. 00:03:04.551 00:03:04.551 real 0m1.667s 00:03:04.551 user 0m4.168s 00:03:04.551 sys 0m0.497s 00:03:04.551 13:26:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.551 13:26:43 -- common/autotest_common.sh@10 -- # set +x 00:03:04.551 ************************************ 00:03:04.551 END TEST event_perf 00:03:04.551 ************************************ 00:03:04.809 13:26:43 -- event/event.sh@46 -- # run_test event_reactor /usr/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:03:04.809 13:26:43 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:03:04.809 13:26:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:04.809 13:26:43 -- common/autotest_common.sh@10 -- # set +x 00:03:04.809 ************************************ 00:03:04.809 START TEST event_reactor 00:03:04.809 ************************************ 00:03:04.809 13:26:43 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:03:04.809 [2024-07-10 13:26:43.963091] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
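The event_perf run earlier in this block (-m 0xF -t 1) prints one counter per lcore, and all four reactors land within one event of 2209982 for the one-second window, so the host is handling roughly 8.8 million events per second in aggregate. A quick check using only the numbers printed above:

  echo $((2209982 + 2209982 + 2209981 + 2209982))   # 8839927 events across 4 cores in ~1 s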
00:03:04.809 [2024-07-10 13:26:43.963437] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:05.376 EAL: TSC is not safe to use in SMP mode 00:03:05.376 EAL: TSC is not invariant 00:03:05.376 [2024-07-10 13:26:44.435301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:05.376 [2024-07-10 13:26:44.531756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:06.373 test_start 00:03:06.373 oneshot 00:03:06.373 tick 100 00:03:06.373 tick 100 00:03:06.373 tick 250 00:03:06.373 tick 100 00:03:06.373 tick 100 00:03:06.373 tick 100 00:03:06.373 tick 250 00:03:06.373 tick 500 00:03:06.373 tick 100 00:03:06.373 tick 100 00:03:06.373 tick 250 00:03:06.373 tick 100 00:03:06.373 tick 100 00:03:06.373 test_end 00:03:06.373 00:03:06.373 real 0m1.681s 00:03:06.373 user 0m1.167s 00:03:06.373 sys 0m0.512s 00:03:06.373 13:26:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.373 13:26:45 -- common/autotest_common.sh@10 -- # set +x 00:03:06.373 ************************************ 00:03:06.373 END TEST event_reactor 00:03:06.373 ************************************ 00:03:06.373 13:26:45 -- event/event.sh@47 -- # run_test event_reactor_perf /usr/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:06.373 13:26:45 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:03:06.373 13:26:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:06.373 13:26:45 -- common/autotest_common.sh@10 -- # set +x 00:03:06.373 ************************************ 00:03:06.373 START TEST event_reactor_perf 00:03:06.373 ************************************ 00:03:06.373 13:26:45 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:06.373 [2024-07-10 13:26:45.673997] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:03:06.373 [2024-07-10 13:26:45.674240] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:06.955 EAL: TSC is not safe to use in SMP mode 00:03:06.955 EAL: TSC is not invariant 00:03:06.955 [2024-07-10 13:26:46.154179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:06.955 [2024-07-10 13:26:46.250148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:08.330 test_start 00:03:08.330 test_end 00:03:08.330 Performance: 3477373 events per second 00:03:08.330 00:03:08.330 real 0m1.690s 00:03:08.330 user 0m1.164s 00:03:08.330 sys 0m0.524s 00:03:08.330 13:26:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.330 13:26:47 -- common/autotest_common.sh@10 -- # set +x 00:03:08.330 ************************************ 00:03:08.330 END TEST event_reactor_perf 00:03:08.330 ************************************ 00:03:08.330 13:26:47 -- event/event.sh@49 -- # uname -s 00:03:08.330 13:26:47 -- event/event.sh@49 -- # '[' FreeBSD = Linux ']' 00:03:08.330 00:03:08.330 real 0m5.341s 00:03:08.330 user 0m6.652s 00:03:08.330 sys 0m1.746s 00:03:08.330 13:26:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.330 13:26:47 -- common/autotest_common.sh@10 -- # set +x 00:03:08.330 ************************************ 00:03:08.330 END TEST event 00:03:08.330 ************************************ 00:03:08.330 13:26:47 -- spdk/autotest.sh@188 -- # run_test thread /usr/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:03:08.330 13:26:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:08.330 13:26:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:08.330 13:26:47 -- common/autotest_common.sh@10 -- # set +x 00:03:08.330 ************************************ 00:03:08.330 START TEST thread 00:03:08.330 ************************************ 00:03:08.330 13:26:47 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:03:08.330 * Looking for test storage... 00:03:08.330 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/thread 00:03:08.330 13:26:47 -- thread/thread.sh@11 -- # run_test thread_poller_perf /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:03:08.330 13:26:47 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:03:08.330 13:26:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:08.330 13:26:47 -- common/autotest_common.sh@10 -- # set +x 00:03:08.330 ************************************ 00:03:08.330 START TEST thread_poller_perf 00:03:08.330 ************************************ 00:03:08.330 13:26:47 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:03:08.330 [2024-07-10 13:26:47.612514] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:08.330 [2024-07-10 13:26:47.612704] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:08.898 EAL: TSC is not safe to use in SMP mode 00:03:08.898 EAL: TSC is not invariant 00:03:08.898 [2024-07-10 13:26:48.086640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:08.898 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:03:08.898 [2024-07-10 13:26:48.190872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:10.292 ====================================== 00:03:10.292 busy:2296973548 (cyc) 00:03:10.292 total_run_count: 5593000 00:03:10.292 tsc_hz: 2294610885 (cyc) 00:03:10.292 ====================================== 00:03:10.292 poller_cost: 410 (cyc), 178 (nsec) 00:03:10.292 00:03:10.292 real 0m1.687s 00:03:10.292 user 0m1.176s 00:03:10.292 sys 0m0.509s 00:03:10.292 13:26:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:10.292 13:26:49 -- common/autotest_common.sh@10 -- # set +x 00:03:10.292 ************************************ 00:03:10.292 END TEST thread_poller_perf 00:03:10.292 ************************************ 00:03:10.292 13:26:49 -- thread/thread.sh@12 -- # run_test thread_poller_perf /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:03:10.292 13:26:49 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:03:10.292 13:26:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:10.292 13:26:49 -- common/autotest_common.sh@10 -- # set +x 00:03:10.292 ************************************ 00:03:10.292 START TEST thread_poller_perf 00:03:10.292 ************************************ 00:03:10.292 13:26:49 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:03:10.292 [2024-07-10 13:26:49.328886] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:10.292 [2024-07-10 13:26:49.329095] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:10.550 EAL: TSC is not safe to use in SMP mode 00:03:10.550 EAL: TSC is not invariant 00:03:10.550 [2024-07-10 13:26:49.844269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:10.809 [2024-07-10 13:26:49.931001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:10.809 Running 1000 pollers for 1 seconds with 0 microseconds period. 
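The poller_cost figures in these poller_perf summaries are consistent with busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. Checking the 1-microsecond-period run above, and the zero-period run reported below, with bc and only the values the tool printed:

  echo '2296973548 / 5593000' | bc                               # 410 cycles per poller invocation
  echo '2296973548 * 1000000000 / (5593000 * 2294610885)' | bc   # 178 ns per invocation
  echo '2296059790 / 77411000' | bc                              # 29 cycles for the zero-period run below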
00:03:11.745 ====================================== 00:03:11.745 busy:2296059790 (cyc) 00:03:11.745 total_run_count: 77411000 00:03:11.745 tsc_hz: 2294610885 (cyc) 00:03:11.745 ====================================== 00:03:11.745 poller_cost: 29 (cyc), 12 (nsec) 00:03:11.745 00:03:11.745 real 0m1.711s 00:03:11.745 user 0m1.158s 00:03:11.745 sys 0m0.552s 00:03:11.745 13:26:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:11.745 13:26:51 -- common/autotest_common.sh@10 -- # set +x 00:03:11.745 ************************************ 00:03:11.745 END TEST thread_poller_perf 00:03:11.745 ************************************ 00:03:11.745 13:26:51 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:03:11.745 13:26:51 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:03:11.745 13:26:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:11.745 13:26:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:11.745 13:26:51 -- common/autotest_common.sh@10 -- # set +x 00:03:11.745 ************************************ 00:03:11.745 START TEST thread_spdk_lock 00:03:11.745 ************************************ 00:03:11.745 13:26:51 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:03:11.745 [2024-07-10 13:26:51.071725] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:11.745 [2024-07-10 13:26:51.071969] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:12.311 EAL: TSC is not safe to use in SMP mode 00:03:12.311 EAL: TSC is not invariant 00:03:12.311 [2024-07-10 13:26:51.539403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:12.311 [2024-07-10 13:26:51.643075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:12.311 [2024-07-10 13:26:51.643056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:12.901 [2024-07-10 13:26:52.084364] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:03:12.901 [2024-07-10 13:26:52.084447] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:03:12.901 [2024-07-10 13:26:52.084457] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x30ee20 00:03:12.901 [2024-07-10 13:26:52.084820] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:03:12.901 [2024-07-10 13:26:52.084919] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:03:12.901 [2024-07-10 13:26:52.084935] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:03:12.901 Starting test contend 00:03:12.901 Worker Delay Wait us Hold us Total us 00:03:12.901 0 3 261062 164080 425143 00:03:12.901 1 5 161953 265005 426959 00:03:12.901 PASS test contend 00:03:12.901 Starting 
test hold_by_poller 00:03:12.901 PASS test hold_by_poller 00:03:12.901 Starting test hold_by_message 00:03:12.901 PASS test hold_by_message 00:03:12.901 /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:03:12.901 100014 assertions passed 00:03:12.901 0 assertions failed 00:03:12.901 00:03:12.901 real 0m1.120s 00:03:12.901 user 0m1.048s 00:03:12.901 sys 0m0.512s 00:03:12.901 13:26:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.901 13:26:52 -- common/autotest_common.sh@10 -- # set +x 00:03:12.901 ************************************ 00:03:12.901 END TEST thread_spdk_lock 00:03:12.901 ************************************ 00:03:12.901 00:03:12.901 real 0m4.778s 00:03:12.901 user 0m3.541s 00:03:12.901 sys 0m1.739s 00:03:12.901 13:26:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.901 13:26:52 -- common/autotest_common.sh@10 -- # set +x 00:03:12.901 ************************************ 00:03:12.901 END TEST thread 00:03:12.901 ************************************ 00:03:12.901 13:26:52 -- spdk/autotest.sh@189 -- # run_test accel /usr/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:03:12.901 13:26:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:12.901 13:26:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:12.901 13:26:52 -- common/autotest_common.sh@10 -- # set +x 00:03:12.901 ************************************ 00:03:12.901 START TEST accel 00:03:12.901 ************************************ 00:03:12.901 13:26:52 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:03:13.160 * Looking for test storage... 00:03:13.160 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/accel 00:03:13.160 13:26:52 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:03:13.160 13:26:52 -- accel/accel.sh@74 -- # get_expected_opcs 00:03:13.160 13:26:52 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:13.160 13:26:52 -- accel/accel.sh@59 -- # spdk_tgt_pid=46109 00:03:13.160 13:26:52 -- accel/accel.sh@60 -- # waitforlisten 46109 00:03:13.160 13:26:52 -- accel/accel.sh@58 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /tmp//sh-np.RtN5w2 00:03:13.160 13:26:52 -- common/autotest_common.sh@819 -- # '[' -z 46109 ']' 00:03:13.160 13:26:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:13.160 13:26:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:13.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:13.160 13:26:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:13.160 13:26:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:13.160 13:26:52 -- common/autotest_common.sh@10 -- # set +x 00:03:13.160 [2024-07-10 13:26:52.440031] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
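The spdk_tgt instance started above (pid 46109) listens on /var/tmp/spdk.sock, and the lines that follow read its opcode-to-module table back through rpc_cmd accel_get_opc_assignments plus the jq filter shown there. The same query can be issued by hand against a running target; this is an illustrative invocation, with the jq expression copied from accel.sh, and on this configuration every opcode is expected to map to the software module:

$ /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
copy=software
fill=software
...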
00:03:13.160 [2024-07-10 13:26:52.440252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:13.725 EAL: TSC is not safe to use in SMP mode 00:03:13.725 EAL: TSC is not invariant 00:03:13.725 [2024-07-10 13:26:52.908979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:13.725 [2024-07-10 13:26:53.003914] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:13.725 [2024-07-10 13:26:53.004024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:13.725 13:26:53 -- accel/accel.sh@58 -- # build_accel_config 00:03:13.725 13:26:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:13.725 13:26:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:13.725 13:26:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:13.725 13:26:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:13.725 13:26:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:13.725 13:26:53 -- accel/accel.sh@41 -- # local IFS=, 00:03:13.725 13:26:53 -- accel/accel.sh@42 -- # jq -r . 00:03:14.386 13:26:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:14.386 13:26:53 -- common/autotest_common.sh@852 -- # return 0 00:03:14.386 13:26:53 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:03:14.386 13:26:53 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:03:14.386 13:26:53 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:03:14.386 13:26:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:14.386 13:26:53 -- common/autotest_common.sh@10 -- # set +x 00:03:14.386 13:26:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:14.386 13:26:53 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # IFS== 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # read -r opc module 00:03:14.386 13:26:53 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:14.386 13:26:53 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # IFS== 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # read -r opc module 00:03:14.386 13:26:53 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:14.386 13:26:53 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # IFS== 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # read -r opc module 00:03:14.386 13:26:53 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:14.386 13:26:53 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # IFS== 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # read -r opc module 00:03:14.386 13:26:53 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:14.386 13:26:53 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # IFS== 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # read -r opc module 00:03:14.386 13:26:53 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:14.386 13:26:53 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # IFS== 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # read -r opc module 00:03:14.386 13:26:53 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:14.386 13:26:53 -- accel/accel.sh@63 
-- # for opc_opt in "${exp_opcs[@]}" 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # IFS== 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # read -r opc module 00:03:14.386 13:26:53 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:14.386 13:26:53 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # IFS== 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # read -r opc module 00:03:14.386 13:26:53 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:14.386 13:26:53 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # IFS== 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # read -r opc module 00:03:14.386 13:26:53 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:14.386 13:26:53 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # IFS== 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # read -r opc module 00:03:14.386 13:26:53 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:14.386 13:26:53 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # IFS== 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # read -r opc module 00:03:14.386 13:26:53 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:14.386 13:26:53 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # IFS== 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # read -r opc module 00:03:14.386 13:26:53 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:14.386 13:26:53 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # IFS== 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # read -r opc module 00:03:14.386 13:26:53 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:14.386 13:26:53 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # IFS== 00:03:14.386 13:26:53 -- accel/accel.sh@64 -- # read -r opc module 00:03:14.386 13:26:53 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:14.386 13:26:53 -- accel/accel.sh@67 -- # killprocess 46109 00:03:14.386 13:26:53 -- common/autotest_common.sh@926 -- # '[' -z 46109 ']' 00:03:14.386 13:26:53 -- common/autotest_common.sh@930 -- # kill -0 46109 00:03:14.386 13:26:53 -- common/autotest_common.sh@931 -- # uname 00:03:14.386 13:26:53 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:03:14.386 13:26:53 -- common/autotest_common.sh@934 -- # ps -c -o command 46109 00:03:14.386 13:26:53 -- common/autotest_common.sh@934 -- # tail -1 00:03:14.386 13:26:53 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:03:14.386 13:26:53 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:03:14.386 killing process with pid 46109 00:03:14.386 13:26:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46109' 00:03:14.386 13:26:53 -- common/autotest_common.sh@945 -- # kill 46109 00:03:14.386 13:26:53 -- common/autotest_common.sh@950 -- # wait 46109 00:03:14.644 13:26:53 -- accel/accel.sh@68 -- # trap - ERR 00:03:14.644 13:26:53 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:03:14.644 13:26:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:03:14.644 13:26:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:14.644 13:26:53 -- common/autotest_common.sh@10 -- # set +x 00:03:14.644 13:26:53 -- 
common/autotest_common.sh@1104 -- # accel_perf -h 00:03:14.644 13:26:53 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.AVQw1j -h 00:03:14.644 13:26:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:14.644 13:26:53 -- common/autotest_common.sh@10 -- # set +x 00:03:14.644 13:26:53 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:03:14.644 13:26:53 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:03:14.644 13:26:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:14.644 13:26:53 -- common/autotest_common.sh@10 -- # set +x 00:03:14.644 ************************************ 00:03:14.644 START TEST accel_missing_filename 00:03:14.644 ************************************ 00:03:14.644 13:26:53 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:03:14.644 13:26:53 -- common/autotest_common.sh@640 -- # local es=0 00:03:14.644 13:26:53 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:03:14.644 13:26:53 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:03:14.644 13:26:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:03:14.644 13:26:53 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:03:14.644 13:26:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:03:14.644 13:26:53 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:03:14.644 13:26:53 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.itmc5n -t 1 -w compress 00:03:14.644 [2024-07-10 13:26:53.814694] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:14.644 [2024-07-10 13:26:53.814926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:15.210 EAL: TSC is not safe to use in SMP mode 00:03:15.210 EAL: TSC is not invariant 00:03:15.210 [2024-07-10 13:26:54.278696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:15.210 [2024-07-10 13:26:54.388171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:15.210 13:26:54 -- accel/accel.sh@12 -- # build_accel_config 00:03:15.210 13:26:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:15.210 13:26:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:15.210 13:26:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:15.210 13:26:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:15.210 13:26:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:15.210 13:26:54 -- accel/accel.sh@41 -- # local IFS=, 00:03:15.210 13:26:54 -- accel/accel.sh@42 -- # jq -r . 00:03:15.210 [2024-07-10 13:26:54.397337] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:15.210 [2024-07-10 13:26:54.427856] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:03:15.210 A filename is required. 
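This negative test runs the compress workload without naming an input file, which is why accel_perf aborts with the 'A filename is required.' error captured above; supplying one with -l (described in the option dump a little further down) is exactly what the accel_compress_verify case does next. Roughly, outside the harness and using paths that appear elsewhere in this log:

$ /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
# fails: A filename is required.
$ /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
    -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib
# runs the compression workload against the bundled bib sample file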
00:03:15.210 13:26:54 -- common/autotest_common.sh@643 -- # es=234 00:03:15.210 13:26:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:03:15.210 13:26:54 -- common/autotest_common.sh@652 -- # es=106 00:03:15.210 13:26:54 -- common/autotest_common.sh@653 -- # case "$es" in 00:03:15.210 13:26:54 -- common/autotest_common.sh@660 -- # es=1 00:03:15.210 13:26:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:03:15.210 00:03:15.210 real 0m0.744s 00:03:15.210 user 0m0.235s 00:03:15.210 sys 0m0.510s 00:03:15.210 13:26:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.210 13:26:54 -- common/autotest_common.sh@10 -- # set +x 00:03:15.210 ************************************ 00:03:15.210 END TEST accel_missing_filename 00:03:15.210 ************************************ 00:03:15.468 13:26:54 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:03:15.468 13:26:54 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:03:15.468 13:26:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:15.468 13:26:54 -- common/autotest_common.sh@10 -- # set +x 00:03:15.468 ************************************ 00:03:15.468 START TEST accel_compress_verify 00:03:15.468 ************************************ 00:03:15.468 13:26:54 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:03:15.468 13:26:54 -- common/autotest_common.sh@640 -- # local es=0 00:03:15.468 13:26:54 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:03:15.468 13:26:54 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:03:15.468 13:26:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:03:15.468 13:26:54 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:03:15.468 13:26:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:03:15.468 13:26:54 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:03:15.468 13:26:54 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.0ByuLT -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:03:15.468 [2024-07-10 13:26:54.586285] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:15.468 [2024-07-10 13:26:54.586507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:15.727 EAL: TSC is not safe to use in SMP mode 00:03:15.727 EAL: TSC is not invariant 00:03:15.727 [2024-07-10 13:26:55.058234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:15.985 [2024-07-10 13:26:55.165697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:15.985 13:26:55 -- accel/accel.sh@12 -- # build_accel_config 00:03:15.985 13:26:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:15.985 13:26:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:15.985 13:26:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:15.985 13:26:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:15.985 13:26:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:15.985 13:26:55 -- accel/accel.sh@41 -- # local IFS=, 00:03:15.985 13:26:55 -- accel/accel.sh@42 -- # jq -r . 
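The es=234, es=106, es=1 steps traced above are the harness normalizing the failing accel_perf exit status inside its NOT helper: statuses above 128 are folded down (234 - 128 = 106) and any remaining non-zero value collapses to 1 before the final (( !es == 0 )) check, so the test passes exactly when the wrapped command fails. A minimal sketch of that pattern, not the actual autotest_common.sh code:

NOT() {                                  # succeed only when the wrapped command fails
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$((es - 128))   # fold signal-style exit statuses
    (( es != 0 ))                        # non-zero status means the expected failure happened
}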
00:03:15.985 [2024-07-10 13:26:55.175540] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:15.985 [2024-07-10 13:26:55.208817] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:03:15.985 00:03:15.985 Compression does not support the verify option, aborting. 00:03:15.985 13:26:55 -- common/autotest_common.sh@643 -- # es=211 00:03:15.985 13:26:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:03:15.985 13:26:55 -- common/autotest_common.sh@652 -- # es=83 00:03:15.985 13:26:55 -- common/autotest_common.sh@653 -- # case "$es" in 00:03:15.985 13:26:55 -- common/autotest_common.sh@660 -- # es=1 00:03:15.985 13:26:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:03:15.985 00:03:15.985 real 0m0.737s 00:03:15.985 user 0m0.213s 00:03:15.985 sys 0m0.526s 00:03:15.985 13:26:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.985 13:26:55 -- common/autotest_common.sh@10 -- # set +x 00:03:15.985 ************************************ 00:03:15.985 END TEST accel_compress_verify 00:03:15.985 ************************************ 00:03:15.985 13:26:55 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:03:15.985 13:26:55 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:03:15.985 13:26:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:15.985 13:26:55 -- common/autotest_common.sh@10 -- # set +x 00:03:15.985 ************************************ 00:03:15.985 START TEST accel_wrong_workload 00:03:15.985 ************************************ 00:03:15.985 13:26:55 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:03:15.985 13:26:55 -- common/autotest_common.sh@640 -- # local es=0 00:03:15.985 13:26:55 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:03:16.244 13:26:55 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:03:16.244 13:26:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:03:16.244 13:26:55 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:03:16.244 13:26:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:03:16.244 13:26:55 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:03:16.244 13:26:55 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.BWppIY -t 1 -w foobar 00:03:16.244 Unsupported workload type: foobar 00:03:16.244 [2024-07-10 13:26:55.352100] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:03:16.244 accel_perf options: 00:03:16.244 [-h help message] 00:03:16.244 [-q queue depth per core] 00:03:16.244 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:03:16.244 [-T number of threads per core 00:03:16.244 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:03:16.244 [-t time in seconds] 00:03:16.244 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:03:16.244 [ dif_verify, , dif_generate, dif_generate_copy 00:03:16.244 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:03:16.244 [-l for compress/decompress workloads, name of uncompressed input file 00:03:16.244 [-S for crc32c workload, use this seed value (default 0) 00:03:16.244 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:03:16.244 [-f for fill workload, use this BYTE value (default 255) 00:03:16.244 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:03:16.244 [-y verify result if this switch is on] 00:03:16.244 [-a tasks to allocate per core (default: same value as -q)] 00:03:16.244 Can be used to spread operations across a wider range of memory. 00:03:16.244 13:26:55 -- common/autotest_common.sh@643 -- # es=1 00:03:16.244 13:26:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:03:16.244 13:26:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:03:16.244 13:26:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:03:16.244 00:03:16.244 real 0m0.008s 00:03:16.244 user 0m0.004s 00:03:16.244 sys 0m0.004s 00:03:16.244 13:26:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.244 13:26:55 -- common/autotest_common.sh@10 -- # set +x 00:03:16.244 ************************************ 00:03:16.244 END TEST accel_wrong_workload 00:03:16.244 ************************************ 00:03:16.244 13:26:55 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:03:16.244 13:26:55 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:03:16.244 13:26:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:16.244 13:26:55 -- common/autotest_common.sh@10 -- # set +x 00:03:16.244 ************************************ 00:03:16.244 START TEST accel_negative_buffers 00:03:16.244 ************************************ 00:03:16.244 13:26:55 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:03:16.244 13:26:55 -- common/autotest_common.sh@640 -- # local es=0 00:03:16.244 13:26:55 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:03:16.244 13:26:55 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:03:16.244 13:26:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:03:16.244 13:26:55 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:03:16.244 13:26:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:03:16.244 13:26:55 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:03:16.244 13:26:55 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.poNFDb -t 1 -w xor -y -x -1 00:03:16.244 -x option must be non-negative. 00:03:16.244 [2024-07-10 13:26:55.398183] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:03:16.244 accel_perf options: 00:03:16.244 [-h help message] 00:03:16.244 [-q queue depth per core] 00:03:16.244 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:03:16.244 [-T number of threads per core 00:03:16.244 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:03:16.244 [-t time in seconds] 00:03:16.244 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:03:16.244 [ dif_verify, , dif_generate, dif_generate_copy 00:03:16.244 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:03:16.244 [-l for compress/decompress workloads, name of uncompressed input file 00:03:16.244 [-S for crc32c workload, use this seed value (default 0) 00:03:16.244 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:03:16.244 [-f for fill workload, use this BYTE value (default 255) 00:03:16.244 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:03:16.244 [-y verify result if this switch is on] 00:03:16.244 [-a tasks to allocate per core (default: same value as -q)] 00:03:16.244 Can be used to spread operations across a wider range of memory. 00:03:16.244 13:26:55 -- common/autotest_common.sh@643 -- # es=1 00:03:16.244 13:26:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:03:16.244 13:26:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:03:16.244 13:26:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:03:16.244 00:03:16.244 real 0m0.011s 00:03:16.244 user 0m0.006s 00:03:16.244 sys 0m0.007s 00:03:16.244 13:26:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.244 13:26:55 -- common/autotest_common.sh@10 -- # set +x 00:03:16.244 ************************************ 00:03:16.244 END TEST accel_negative_buffers 00:03:16.244 ************************************ 00:03:16.244 13:26:55 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:03:16.244 13:26:55 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:03:16.244 13:26:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:16.244 13:26:55 -- common/autotest_common.sh@10 -- # set +x 00:03:16.244 ************************************ 00:03:16.244 START TEST accel_crc32c 00:03:16.244 ************************************ 00:03:16.244 13:26:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:03:16.244 13:26:55 -- accel/accel.sh@16 -- # local accel_opc 00:03:16.244 13:26:55 -- accel/accel.sh@17 -- # local accel_module 00:03:16.244 13:26:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:03:16.244 13:26:55 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.QCS9e3 -t 1 -w crc32c -S 32 -y 00:03:16.245 [2024-07-10 13:26:55.444901] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:03:16.245 [2024-07-10 13:26:55.445118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:16.810 EAL: TSC is not safe to use in SMP mode 00:03:16.810 EAL: TSC is not invariant 00:03:16.810 [2024-07-10 13:26:55.972590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:16.810 [2024-07-10 13:26:56.077619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:16.810 13:26:56 -- accel/accel.sh@12 -- # build_accel_config 00:03:16.810 13:26:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:16.810 13:26:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:16.810 13:26:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:16.810 13:26:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:16.810 13:26:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:16.810 13:26:56 -- accel/accel.sh@41 -- # local IFS=, 00:03:16.810 13:26:56 -- accel/accel.sh@42 -- # jq -r . 00:03:18.185 13:26:57 -- accel/accel.sh@18 -- # out=' 00:03:18.185 SPDK Configuration: 00:03:18.185 Core mask: 0x1 00:03:18.185 00:03:18.185 Accel Perf Configuration: 00:03:18.185 Workload Type: crc32c 00:03:18.185 CRC-32C seed: 32 00:03:18.185 Transfer size: 4096 bytes 00:03:18.185 Vector count 1 00:03:18.185 Module: software 00:03:18.185 Queue depth: 32 00:03:18.185 Allocate depth: 32 00:03:18.185 # threads/core: 1 00:03:18.185 Run time: 1 seconds 00:03:18.185 Verify: Yes 00:03:18.185 00:03:18.185 Running for 1 seconds... 00:03:18.185 00:03:18.185 Core,Thread Transfers Bandwidth Failed Miscompares 00:03:18.185 ------------------------------------------------------------------------------------ 00:03:18.185 0,0 2021408/s 7896 MiB/s 0 0 00:03:18.185 ==================================================================================== 00:03:18.185 Total 2021408/s 7896 MiB/s 0 0' 00:03:18.185 13:26:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:03:18.185 13:26:57 -- accel/accel.sh@20 -- # IFS=: 00:03:18.185 13:26:57 -- accel/accel.sh@20 -- # read -r var val 00:03:18.185 13:26:57 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.NmwAg1 -t 1 -w crc32c -S 32 -y 00:03:18.185 [2024-07-10 13:26:57.226527] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:18.185 [2024-07-10 13:26:57.226747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:18.444 EAL: TSC is not safe to use in SMP mode 00:03:18.444 EAL: TSC is not invariant 00:03:18.444 [2024-07-10 13:26:57.706368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:18.444 [2024-07-10 13:26:57.800842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:18.444 13:26:57 -- accel/accel.sh@12 -- # build_accel_config 00:03:18.444 13:26:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:18.444 13:26:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:18.444 13:26:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:18.444 13:26:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:18.444 13:26:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:18.444 13:26:57 -- accel/accel.sh@41 -- # local IFS=, 00:03:18.703 13:26:57 -- accel/accel.sh@42 -- # jq -r . 
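In the result table above the Bandwidth column for this run is just the completed transfer count times the 4096-byte transfer size, and with a single worker the per-core row and the Total row describe the same thread. Checking the reported 7896 MiB/s (illustrative arithmetic only):

$ echo $(( 2021408 * 4096 / 1024 / 1024 ))   # transfers/s x 4 KiB, in MiB/s
7896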
00:03:18.703 13:26:57 -- accel/accel.sh@21 -- # val= 00:03:18.703 13:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # IFS=: 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # read -r var val 00:03:18.703 13:26:57 -- accel/accel.sh@21 -- # val= 00:03:18.703 13:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # IFS=: 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # read -r var val 00:03:18.703 13:26:57 -- accel/accel.sh@21 -- # val=0x1 00:03:18.703 13:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # IFS=: 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # read -r var val 00:03:18.703 13:26:57 -- accel/accel.sh@21 -- # val= 00:03:18.703 13:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # IFS=: 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # read -r var val 00:03:18.703 13:26:57 -- accel/accel.sh@21 -- # val= 00:03:18.703 13:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # IFS=: 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # read -r var val 00:03:18.703 13:26:57 -- accel/accel.sh@21 -- # val=crc32c 00:03:18.703 13:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:03:18.703 13:26:57 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # IFS=: 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # read -r var val 00:03:18.703 13:26:57 -- accel/accel.sh@21 -- # val=32 00:03:18.703 13:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # IFS=: 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # read -r var val 00:03:18.703 13:26:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:18.703 13:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # IFS=: 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # read -r var val 00:03:18.703 13:26:57 -- accel/accel.sh@21 -- # val= 00:03:18.703 13:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # IFS=: 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # read -r var val 00:03:18.703 13:26:57 -- accel/accel.sh@21 -- # val=software 00:03:18.703 13:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:03:18.703 13:26:57 -- accel/accel.sh@23 -- # accel_module=software 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # IFS=: 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # read -r var val 00:03:18.703 13:26:57 -- accel/accel.sh@21 -- # val=32 00:03:18.703 13:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # IFS=: 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # read -r var val 00:03:18.703 13:26:57 -- accel/accel.sh@21 -- # val=32 00:03:18.703 13:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # IFS=: 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # read -r var val 00:03:18.703 13:26:57 -- accel/accel.sh@21 -- # val=1 00:03:18.703 13:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # IFS=: 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # read -r var val 00:03:18.703 13:26:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:03:18.703 13:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # IFS=: 00:03:18.703 13:26:57 -- accel/accel.sh@20 -- # read -r var val 00:03:18.703 13:26:57 -- accel/accel.sh@21 -- # val=Yes 00:03:18.703 13:26:57 -- 
accel/accel.sh@22 -- # case "$var" in 00:03:18.704 13:26:57 -- accel/accel.sh@20 -- # IFS=: 00:03:18.704 13:26:57 -- accel/accel.sh@20 -- # read -r var val 00:03:18.704 13:26:57 -- accel/accel.sh@21 -- # val= 00:03:18.704 13:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:03:18.704 13:26:57 -- accel/accel.sh@20 -- # IFS=: 00:03:18.704 13:26:57 -- accel/accel.sh@20 -- # read -r var val 00:03:18.704 13:26:57 -- accel/accel.sh@21 -- # val= 00:03:18.704 13:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:03:18.704 13:26:57 -- accel/accel.sh@20 -- # IFS=: 00:03:18.704 13:26:57 -- accel/accel.sh@20 -- # read -r var val 00:03:19.639 13:26:58 -- accel/accel.sh@21 -- # val= 00:03:19.639 13:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:03:19.639 13:26:58 -- accel/accel.sh@20 -- # IFS=: 00:03:19.639 13:26:58 -- accel/accel.sh@20 -- # read -r var val 00:03:19.639 13:26:58 -- accel/accel.sh@21 -- # val= 00:03:19.639 13:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:03:19.639 13:26:58 -- accel/accel.sh@20 -- # IFS=: 00:03:19.639 13:26:58 -- accel/accel.sh@20 -- # read -r var val 00:03:19.639 13:26:58 -- accel/accel.sh@21 -- # val= 00:03:19.639 13:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:03:19.639 13:26:58 -- accel/accel.sh@20 -- # IFS=: 00:03:19.639 13:26:58 -- accel/accel.sh@20 -- # read -r var val 00:03:19.639 13:26:58 -- accel/accel.sh@21 -- # val= 00:03:19.639 13:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:03:19.639 13:26:58 -- accel/accel.sh@20 -- # IFS=: 00:03:19.639 13:26:58 -- accel/accel.sh@20 -- # read -r var val 00:03:19.639 13:26:58 -- accel/accel.sh@21 -- # val= 00:03:19.639 13:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:03:19.639 13:26:58 -- accel/accel.sh@20 -- # IFS=: 00:03:19.639 13:26:58 -- accel/accel.sh@20 -- # read -r var val 00:03:19.639 13:26:58 -- accel/accel.sh@21 -- # val= 00:03:19.639 13:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:03:19.639 13:26:58 -- accel/accel.sh@20 -- # IFS=: 00:03:19.639 13:26:58 -- accel/accel.sh@20 -- # read -r var val 00:03:19.639 13:26:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:03:19.639 13:26:58 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:03:19.639 13:26:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:03:19.639 00:03:19.639 real 0m3.516s 00:03:19.639 user 0m2.435s 00:03:19.639 sys 0m1.098s 00:03:19.639 13:26:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:19.639 13:26:58 -- common/autotest_common.sh@10 -- # set +x 00:03:19.639 ************************************ 00:03:19.639 END TEST accel_crc32c 00:03:19.639 ************************************ 00:03:19.639 13:26:58 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:03:19.639 13:26:58 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:03:19.639 13:26:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:19.639 13:26:58 -- common/autotest_common.sh@10 -- # set +x 00:03:19.897 ************************************ 00:03:19.897 START TEST accel_crc32c_C2 00:03:19.897 ************************************ 00:03:19.897 13:26:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:03:19.897 13:26:59 -- accel/accel.sh@16 -- # local accel_opc 00:03:19.897 13:26:59 -- accel/accel.sh@17 -- # local accel_module 00:03:19.897 13:26:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:03:19.897 13:26:59 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.XgrSb2 -t 1 -w crc32c -y -C 2 00:03:19.897 
[2024-07-10 13:26:59.020641] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:19.897 [2024-07-10 13:26:59.020983] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:20.154 EAL: TSC is not safe to use in SMP mode 00:03:20.154 EAL: TSC is not invariant 00:03:20.154 [2024-07-10 13:26:59.474661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:20.413 [2024-07-10 13:26:59.561032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:20.413 13:26:59 -- accel/accel.sh@12 -- # build_accel_config 00:03:20.413 13:26:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:20.413 13:26:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:20.413 13:26:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:20.413 13:26:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:20.413 13:26:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:20.413 13:26:59 -- accel/accel.sh@41 -- # local IFS=, 00:03:20.413 13:26:59 -- accel/accel.sh@42 -- # jq -r . 00:03:21.816 13:27:00 -- accel/accel.sh@18 -- # out=' 00:03:21.816 SPDK Configuration: 00:03:21.816 Core mask: 0x1 00:03:21.816 00:03:21.816 Accel Perf Configuration: 00:03:21.816 Workload Type: crc32c 00:03:21.816 CRC-32C seed: 0 00:03:21.816 Transfer size: 4096 bytes 00:03:21.816 Vector count 2 00:03:21.816 Module: software 00:03:21.816 Queue depth: 32 00:03:21.816 Allocate depth: 32 00:03:21.816 # threads/core: 1 00:03:21.816 Run time: 1 seconds 00:03:21.816 Verify: Yes 00:03:21.816 00:03:21.816 Running for 1 seconds... 00:03:21.816 00:03:21.816 Core,Thread Transfers Bandwidth Failed Miscompares 00:03:21.816 ------------------------------------------------------------------------------------ 00:03:21.816 0,0 1104864/s 8631 MiB/s 0 0 00:03:21.816 ==================================================================================== 00:03:21.816 Total 1104864/s 4315 MiB/s 0 0' 00:03:21.816 13:27:00 -- accel/accel.sh@20 -- # IFS=: 00:03:21.816 13:27:00 -- accel/accel.sh@20 -- # read -r var val 00:03:21.816 13:27:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:03:21.816 13:27:00 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.pOz639 -t 1 -w crc32c -y -C 2 00:03:21.816 [2024-07-10 13:27:00.735857] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:21.816 [2024-07-10 13:27:00.736084] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:22.075 EAL: TSC is not safe to use in SMP mode 00:03:22.075 EAL: TSC is not invariant 00:03:22.075 [2024-07-10 13:27:01.238074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:22.075 [2024-07-10 13:27:01.329147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:22.075 13:27:01 -- accel/accel.sh@12 -- # build_accel_config 00:03:22.075 13:27:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:22.075 13:27:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:22.075 13:27:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:22.075 13:27:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:22.075 13:27:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:22.075 13:27:01 -- accel/accel.sh@41 -- # local IFS=, 00:03:22.075 13:27:01 -- accel/accel.sh@42 -- # jq -r . 
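In this chained run (Vector count 2) each completed transfer carries two 4096-byte buffers, which is why transfers/s drops to roughly half of the single-buffer run above while the amount of data hashed per second stays in the same range. The two MiB/s figures differ because the per-core row appears to account for both buffers per transfer while the Total line appears to be derived from the 4 KiB transfer size alone; the underlying data rate works out as follows (illustrative arithmetic only):

$ echo $(( 1104864 * 2 * 4096 / 1024 / 1024 ))   # transfers/s x 2 buffers x 4 KiB, in MiB/s
8631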
00:03:22.075 13:27:01 -- accel/accel.sh@21 -- # val= 00:03:22.075 13:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # IFS=: 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # read -r var val 00:03:22.075 13:27:01 -- accel/accel.sh@21 -- # val= 00:03:22.075 13:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # IFS=: 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # read -r var val 00:03:22.075 13:27:01 -- accel/accel.sh@21 -- # val=0x1 00:03:22.075 13:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # IFS=: 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # read -r var val 00:03:22.075 13:27:01 -- accel/accel.sh@21 -- # val= 00:03:22.075 13:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # IFS=: 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # read -r var val 00:03:22.075 13:27:01 -- accel/accel.sh@21 -- # val= 00:03:22.075 13:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # IFS=: 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # read -r var val 00:03:22.075 13:27:01 -- accel/accel.sh@21 -- # val=crc32c 00:03:22.075 13:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:03:22.075 13:27:01 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # IFS=: 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # read -r var val 00:03:22.075 13:27:01 -- accel/accel.sh@21 -- # val=0 00:03:22.075 13:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # IFS=: 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # read -r var val 00:03:22.075 13:27:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:22.075 13:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # IFS=: 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # read -r var val 00:03:22.075 13:27:01 -- accel/accel.sh@21 -- # val= 00:03:22.075 13:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # IFS=: 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # read -r var val 00:03:22.075 13:27:01 -- accel/accel.sh@21 -- # val=software 00:03:22.075 13:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:03:22.075 13:27:01 -- accel/accel.sh@23 -- # accel_module=software 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # IFS=: 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # read -r var val 00:03:22.075 13:27:01 -- accel/accel.sh@21 -- # val=32 00:03:22.075 13:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # IFS=: 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # read -r var val 00:03:22.075 13:27:01 -- accel/accel.sh@21 -- # val=32 00:03:22.075 13:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # IFS=: 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # read -r var val 00:03:22.075 13:27:01 -- accel/accel.sh@21 -- # val=1 00:03:22.075 13:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # IFS=: 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # read -r var val 00:03:22.075 13:27:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:03:22.075 13:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # IFS=: 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # read -r var val 00:03:22.075 13:27:01 -- accel/accel.sh@21 -- # val=Yes 00:03:22.075 13:27:01 -- 
accel/accel.sh@22 -- # case "$var" in 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # IFS=: 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # read -r var val 00:03:22.075 13:27:01 -- accel/accel.sh@21 -- # val= 00:03:22.075 13:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # IFS=: 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # read -r var val 00:03:22.075 13:27:01 -- accel/accel.sh@21 -- # val= 00:03:22.075 13:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # IFS=: 00:03:22.075 13:27:01 -- accel/accel.sh@20 -- # read -r var val 00:03:23.451 13:27:02 -- accel/accel.sh@21 -- # val= 00:03:23.451 13:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:03:23.451 13:27:02 -- accel/accel.sh@20 -- # IFS=: 00:03:23.451 13:27:02 -- accel/accel.sh@20 -- # read -r var val 00:03:23.451 13:27:02 -- accel/accel.sh@21 -- # val= 00:03:23.451 13:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:03:23.451 13:27:02 -- accel/accel.sh@20 -- # IFS=: 00:03:23.451 13:27:02 -- accel/accel.sh@20 -- # read -r var val 00:03:23.451 13:27:02 -- accel/accel.sh@21 -- # val= 00:03:23.451 13:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:03:23.451 13:27:02 -- accel/accel.sh@20 -- # IFS=: 00:03:23.451 13:27:02 -- accel/accel.sh@20 -- # read -r var val 00:03:23.451 13:27:02 -- accel/accel.sh@21 -- # val= 00:03:23.451 13:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:03:23.451 13:27:02 -- accel/accel.sh@20 -- # IFS=: 00:03:23.451 13:27:02 -- accel/accel.sh@20 -- # read -r var val 00:03:23.451 13:27:02 -- accel/accel.sh@21 -- # val= 00:03:23.451 13:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:03:23.451 13:27:02 -- accel/accel.sh@20 -- # IFS=: 00:03:23.451 13:27:02 -- accel/accel.sh@20 -- # read -r var val 00:03:23.451 13:27:02 -- accel/accel.sh@21 -- # val= 00:03:23.451 13:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:03:23.451 13:27:02 -- accel/accel.sh@20 -- # IFS=: 00:03:23.451 13:27:02 -- accel/accel.sh@20 -- # read -r var val 00:03:23.451 13:27:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:03:23.451 13:27:02 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:03:23.451 13:27:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:03:23.451 00:03:23.451 real 0m3.466s 00:03:23.451 user 0m2.411s 00:03:23.451 sys 0m1.063s 00:03:23.451 13:27:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.451 13:27:02 -- common/autotest_common.sh@10 -- # set +x 00:03:23.451 ************************************ 00:03:23.451 END TEST accel_crc32c_C2 00:03:23.451 ************************************ 00:03:23.451 13:27:02 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:03:23.451 13:27:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:03:23.451 13:27:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:23.451 13:27:02 -- common/autotest_common.sh@10 -- # set +x 00:03:23.451 ************************************ 00:03:23.451 START TEST accel_copy 00:03:23.451 ************************************ 00:03:23.451 13:27:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:03:23.451 13:27:02 -- accel/accel.sh@16 -- # local accel_opc 00:03:23.451 13:27:02 -- accel/accel.sh@17 -- # local accel_module 00:03:23.451 13:27:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:03:23.451 13:27:02 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.YCjoyx -t 1 -w copy -y 00:03:23.451 [2024-07-10 13:27:02.508393] Starting 
SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:23.451 [2024-07-10 13:27:02.508592] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:23.709 EAL: TSC is not safe to use in SMP mode 00:03:23.709 EAL: TSC is not invariant 00:03:23.709 [2024-07-10 13:27:03.000392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:23.968 [2024-07-10 13:27:03.102718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:23.968 13:27:03 -- accel/accel.sh@12 -- # build_accel_config 00:03:23.968 13:27:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:23.968 13:27:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:23.968 13:27:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:23.968 13:27:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:23.968 13:27:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:23.968 13:27:03 -- accel/accel.sh@41 -- # local IFS=, 00:03:23.968 13:27:03 -- accel/accel.sh@42 -- # jq -r . 00:03:24.902 13:27:04 -- accel/accel.sh@18 -- # out=' 00:03:24.902 SPDK Configuration: 00:03:24.902 Core mask: 0x1 00:03:24.902 00:03:24.902 Accel Perf Configuration: 00:03:24.902 Workload Type: copy 00:03:24.902 Transfer size: 4096 bytes 00:03:24.902 Vector count 1 00:03:24.902 Module: software 00:03:24.902 Queue depth: 32 00:03:24.902 Allocate depth: 32 00:03:24.902 # threads/core: 1 00:03:24.902 Run time: 1 seconds 00:03:24.902 Verify: Yes 00:03:24.902 00:03:24.902 Running for 1 seconds... 00:03:24.902 00:03:24.902 Core,Thread Transfers Bandwidth Failed Miscompares 00:03:24.902 ------------------------------------------------------------------------------------ 00:03:24.902 0,0 1777024/s 6941 MiB/s 0 0 00:03:24.902 ==================================================================================== 00:03:24.902 Total 1777024/s 6941 MiB/s 0 0' 00:03:24.902 13:27:04 -- accel/accel.sh@20 -- # IFS=: 00:03:24.902 13:27:04 -- accel/accel.sh@20 -- # read -r var val 00:03:24.902 13:27:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:03:24.902 13:27:04 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.5a8XK7 -t 1 -w copy -y 00:03:24.902 [2024-07-10 13:27:04.253611] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:24.902 [2024-07-10 13:27:04.253838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:25.469 EAL: TSC is not safe to use in SMP mode 00:03:25.469 EAL: TSC is not invariant 00:03:25.469 [2024-07-10 13:27:04.762981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:25.729 [2024-07-10 13:27:04.868246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:25.729 13:27:04 -- accel/accel.sh@12 -- # build_accel_config 00:03:25.729 13:27:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:25.729 13:27:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:25.729 13:27:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:25.729 13:27:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:25.729 13:27:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:25.729 13:27:04 -- accel/accel.sh@41 -- # local IFS=, 00:03:25.729 13:27:04 -- accel/accel.sh@42 -- # jq -r . 
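The copy measurement above can be reproduced outside the harness with the same example binary; -t, -w, -o, -q and -y are all described in the option dump earlier in this log, and a plain software run should not need the temporary -c JSON config that accel.sh generates. An illustrative invocation matching the queue depth reported above:

$ /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -o 4096 -q 32 -y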
00:03:25.729 13:27:04 -- accel/accel.sh@21 -- # val= 00:03:25.729 13:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # IFS=: 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # read -r var val 00:03:25.729 13:27:04 -- accel/accel.sh@21 -- # val= 00:03:25.729 13:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # IFS=: 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # read -r var val 00:03:25.729 13:27:04 -- accel/accel.sh@21 -- # val=0x1 00:03:25.729 13:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # IFS=: 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # read -r var val 00:03:25.729 13:27:04 -- accel/accel.sh@21 -- # val= 00:03:25.729 13:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # IFS=: 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # read -r var val 00:03:25.729 13:27:04 -- accel/accel.sh@21 -- # val= 00:03:25.729 13:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # IFS=: 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # read -r var val 00:03:25.729 13:27:04 -- accel/accel.sh@21 -- # val=copy 00:03:25.729 13:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:03:25.729 13:27:04 -- accel/accel.sh@24 -- # accel_opc=copy 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # IFS=: 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # read -r var val 00:03:25.729 13:27:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:25.729 13:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # IFS=: 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # read -r var val 00:03:25.729 13:27:04 -- accel/accel.sh@21 -- # val= 00:03:25.729 13:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # IFS=: 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # read -r var val 00:03:25.729 13:27:04 -- accel/accel.sh@21 -- # val=software 00:03:25.729 13:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:03:25.729 13:27:04 -- accel/accel.sh@23 -- # accel_module=software 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # IFS=: 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # read -r var val 00:03:25.729 13:27:04 -- accel/accel.sh@21 -- # val=32 00:03:25.729 13:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # IFS=: 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # read -r var val 00:03:25.729 13:27:04 -- accel/accel.sh@21 -- # val=32 00:03:25.729 13:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # IFS=: 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # read -r var val 00:03:25.729 13:27:04 -- accel/accel.sh@21 -- # val=1 00:03:25.729 13:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # IFS=: 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # read -r var val 00:03:25.729 13:27:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:03:25.729 13:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # IFS=: 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # read -r var val 00:03:25.729 13:27:04 -- accel/accel.sh@21 -- # val=Yes 00:03:25.729 13:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # IFS=: 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # read -r var val 00:03:25.729 13:27:04 -- accel/accel.sh@21 -- # val= 00:03:25.729 13:27:04 -- accel/accel.sh@22 -- 
# case "$var" in 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # IFS=: 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # read -r var val 00:03:25.729 13:27:04 -- accel/accel.sh@21 -- # val= 00:03:25.729 13:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # IFS=: 00:03:25.729 13:27:04 -- accel/accel.sh@20 -- # read -r var val 00:03:26.686 13:27:06 -- accel/accel.sh@21 -- # val= 00:03:26.686 13:27:06 -- accel/accel.sh@22 -- # case "$var" in 00:03:26.686 13:27:06 -- accel/accel.sh@20 -- # IFS=: 00:03:26.686 13:27:06 -- accel/accel.sh@20 -- # read -r var val 00:03:26.686 13:27:06 -- accel/accel.sh@21 -- # val= 00:03:26.686 13:27:06 -- accel/accel.sh@22 -- # case "$var" in 00:03:26.686 13:27:06 -- accel/accel.sh@20 -- # IFS=: 00:03:26.686 13:27:06 -- accel/accel.sh@20 -- # read -r var val 00:03:26.686 13:27:06 -- accel/accel.sh@21 -- # val= 00:03:26.686 13:27:06 -- accel/accel.sh@22 -- # case "$var" in 00:03:26.686 13:27:06 -- accel/accel.sh@20 -- # IFS=: 00:03:26.686 13:27:06 -- accel/accel.sh@20 -- # read -r var val 00:03:26.686 13:27:06 -- accel/accel.sh@21 -- # val= 00:03:26.686 13:27:06 -- accel/accel.sh@22 -- # case "$var" in 00:03:26.686 13:27:06 -- accel/accel.sh@20 -- # IFS=: 00:03:26.686 13:27:06 -- accel/accel.sh@20 -- # read -r var val 00:03:26.686 13:27:06 -- accel/accel.sh@21 -- # val= 00:03:26.686 13:27:06 -- accel/accel.sh@22 -- # case "$var" in 00:03:26.686 13:27:06 -- accel/accel.sh@20 -- # IFS=: 00:03:26.686 13:27:06 -- accel/accel.sh@20 -- # read -r var val 00:03:26.686 13:27:06 -- accel/accel.sh@21 -- # val= 00:03:26.686 13:27:06 -- accel/accel.sh@22 -- # case "$var" in 00:03:26.686 13:27:06 -- accel/accel.sh@20 -- # IFS=: 00:03:26.686 13:27:06 -- accel/accel.sh@20 -- # read -r var val 00:03:26.686 13:27:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:03:26.686 13:27:06 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:03:26.686 13:27:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:03:26.686 00:03:26.686 real 0m3.512s 00:03:26.686 user 0m2.441s 00:03:26.686 sys 0m1.088s 00:03:26.686 13:27:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.686 13:27:06 -- common/autotest_common.sh@10 -- # set +x 00:03:26.686 ************************************ 00:03:26.686 END TEST accel_copy 00:03:26.686 ************************************ 00:03:26.944 13:27:06 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:03:26.944 13:27:06 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:03:26.944 13:27:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:26.944 13:27:06 -- common/autotest_common.sh@10 -- # set +x 00:03:26.944 ************************************ 00:03:26.944 START TEST accel_fill 00:03:26.944 ************************************ 00:03:26.944 13:27:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:03:26.944 13:27:06 -- accel/accel.sh@16 -- # local accel_opc 00:03:26.944 13:27:06 -- accel/accel.sh@17 -- # local accel_module 00:03:26.944 13:27:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:03:26.944 13:27:06 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.zdmMUL -t 1 -w fill -f 128 -q 64 -a 64 -y 00:03:26.944 [2024-07-10 13:27:06.076692] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:03:26.944 [2024-07-10 13:27:06.077099] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:27.510 EAL: TSC is not safe to use in SMP mode 00:03:27.510 EAL: TSC is not invariant 00:03:27.510 [2024-07-10 13:27:06.581120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:27.510 [2024-07-10 13:27:06.687294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:27.510 13:27:06 -- accel/accel.sh@12 -- # build_accel_config 00:03:27.510 13:27:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:27.510 13:27:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:27.510 13:27:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:27.510 13:27:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:27.510 13:27:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:27.510 13:27:06 -- accel/accel.sh@41 -- # local IFS=, 00:03:27.510 13:27:06 -- accel/accel.sh@42 -- # jq -r . 00:03:28.885 13:27:07 -- accel/accel.sh@18 -- # out=' 00:03:28.885 SPDK Configuration: 00:03:28.885 Core mask: 0x1 00:03:28.885 00:03:28.885 Accel Perf Configuration: 00:03:28.885 Workload Type: fill 00:03:28.885 Fill pattern: 0x80 00:03:28.885 Transfer size: 4096 bytes 00:03:28.885 Vector count 1 00:03:28.885 Module: software 00:03:28.885 Queue depth: 64 00:03:28.885 Allocate depth: 64 00:03:28.885 # threads/core: 1 00:03:28.885 Run time: 1 seconds 00:03:28.885 Verify: Yes 00:03:28.885 00:03:28.885 Running for 1 seconds... 00:03:28.885 00:03:28.885 Core,Thread Transfers Bandwidth Failed Miscompares 00:03:28.885 ------------------------------------------------------------------------------------ 00:03:28.885 0,0 2320640/s 9065 MiB/s 0 0 00:03:28.885 ==================================================================================== 00:03:28.885 Total 2320640/s 9065 MiB/s 0 0' 00:03:28.885 13:27:07 -- accel/accel.sh@20 -- # IFS=: 00:03:28.885 13:27:07 -- accel/accel.sh@20 -- # read -r var val 00:03:28.885 13:27:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:03:28.885 13:27:07 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.VCGeRP -t 1 -w fill -f 128 -q 64 -a 64 -y 00:03:28.885 [2024-07-10 13:27:07.840432] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:28.885 [2024-07-10 13:27:07.840669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:29.143 EAL: TSC is not safe to use in SMP mode 00:03:29.144 EAL: TSC is not invariant 00:03:29.144 [2024-07-10 13:27:08.324628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:29.144 [2024-07-10 13:27:08.425850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:29.144 13:27:08 -- accel/accel.sh@12 -- # build_accel_config 00:03:29.144 13:27:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:29.144 13:27:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:29.144 13:27:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:29.144 13:27:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:29.144 13:27:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:29.144 13:27:08 -- accel/accel.sh@41 -- # local IFS=, 00:03:29.144 13:27:08 -- accel/accel.sh@42 -- # jq -r . 
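The fill report above pairs directly with the command line: -f 128 shows up as the 0x80 fill pattern, -q 64 and -a 64 as the queue and allocate depths, and -y as Verify: Yes. Functionally the software fill path reduces to writing one byte value across the 4096-byte buffer and checking the result. The sketch below is an illustrative model under those parameters, not the accel module's actual code.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative software fill path: set every byte to one pattern, then verify. */
    static int do_fill(uint8_t *dst, size_t len, uint8_t pattern)
    {
        memset(dst, pattern, len);
        for (size_t i = 0; i < len; i++) {
            if (dst[i] != pattern)
                return -1;                  /* would count as a miscompare */
        }
        return 0;
    }

    int main(void)
    {
        uint8_t *buf = malloc(4096);        /* Transfer size: 4096 bytes */

        if (buf == NULL || do_fill(buf, 4096, 0x80) != 0) {   /* pattern 0x80 (-f 128) */
            fprintf(stderr, "fill failed\n");
            free(buf);
            return 1;
        }
        printf("filled 4096 bytes with 0x80\n");
        free(buf);
        return 0;
    }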
00:03:29.144 13:27:08 -- accel/accel.sh@21 -- # val= 00:03:29.144 13:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # IFS=: 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # read -r var val 00:03:29.144 13:27:08 -- accel/accel.sh@21 -- # val= 00:03:29.144 13:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # IFS=: 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # read -r var val 00:03:29.144 13:27:08 -- accel/accel.sh@21 -- # val=0x1 00:03:29.144 13:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # IFS=: 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # read -r var val 00:03:29.144 13:27:08 -- accel/accel.sh@21 -- # val= 00:03:29.144 13:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # IFS=: 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # read -r var val 00:03:29.144 13:27:08 -- accel/accel.sh@21 -- # val= 00:03:29.144 13:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # IFS=: 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # read -r var val 00:03:29.144 13:27:08 -- accel/accel.sh@21 -- # val=fill 00:03:29.144 13:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:03:29.144 13:27:08 -- accel/accel.sh@24 -- # accel_opc=fill 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # IFS=: 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # read -r var val 00:03:29.144 13:27:08 -- accel/accel.sh@21 -- # val=0x80 00:03:29.144 13:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # IFS=: 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # read -r var val 00:03:29.144 13:27:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:29.144 13:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # IFS=: 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # read -r var val 00:03:29.144 13:27:08 -- accel/accel.sh@21 -- # val= 00:03:29.144 13:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # IFS=: 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # read -r var val 00:03:29.144 13:27:08 -- accel/accel.sh@21 -- # val=software 00:03:29.144 13:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:03:29.144 13:27:08 -- accel/accel.sh@23 -- # accel_module=software 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # IFS=: 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # read -r var val 00:03:29.144 13:27:08 -- accel/accel.sh@21 -- # val=64 00:03:29.144 13:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # IFS=: 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # read -r var val 00:03:29.144 13:27:08 -- accel/accel.sh@21 -- # val=64 00:03:29.144 13:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # IFS=: 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # read -r var val 00:03:29.144 13:27:08 -- accel/accel.sh@21 -- # val=1 00:03:29.144 13:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # IFS=: 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # read -r var val 00:03:29.144 13:27:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:03:29.144 13:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # IFS=: 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # read -r var val 00:03:29.144 13:27:08 -- accel/accel.sh@21 -- # val=Yes 00:03:29.144 13:27:08 -- accel/accel.sh@22 
-- # case "$var" in 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # IFS=: 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # read -r var val 00:03:29.144 13:27:08 -- accel/accel.sh@21 -- # val= 00:03:29.144 13:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # IFS=: 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # read -r var val 00:03:29.144 13:27:08 -- accel/accel.sh@21 -- # val= 00:03:29.144 13:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # IFS=: 00:03:29.144 13:27:08 -- accel/accel.sh@20 -- # read -r var val 00:03:30.518 13:27:09 -- accel/accel.sh@21 -- # val= 00:03:30.518 13:27:09 -- accel/accel.sh@22 -- # case "$var" in 00:03:30.518 13:27:09 -- accel/accel.sh@20 -- # IFS=: 00:03:30.518 13:27:09 -- accel/accel.sh@20 -- # read -r var val 00:03:30.518 13:27:09 -- accel/accel.sh@21 -- # val= 00:03:30.518 13:27:09 -- accel/accel.sh@22 -- # case "$var" in 00:03:30.518 13:27:09 -- accel/accel.sh@20 -- # IFS=: 00:03:30.518 13:27:09 -- accel/accel.sh@20 -- # read -r var val 00:03:30.518 13:27:09 -- accel/accel.sh@21 -- # val= 00:03:30.518 13:27:09 -- accel/accel.sh@22 -- # case "$var" in 00:03:30.518 13:27:09 -- accel/accel.sh@20 -- # IFS=: 00:03:30.518 13:27:09 -- accel/accel.sh@20 -- # read -r var val 00:03:30.518 13:27:09 -- accel/accel.sh@21 -- # val= 00:03:30.518 13:27:09 -- accel/accel.sh@22 -- # case "$var" in 00:03:30.518 13:27:09 -- accel/accel.sh@20 -- # IFS=: 00:03:30.518 13:27:09 -- accel/accel.sh@20 -- # read -r var val 00:03:30.518 13:27:09 -- accel/accel.sh@21 -- # val= 00:03:30.518 13:27:09 -- accel/accel.sh@22 -- # case "$var" in 00:03:30.518 13:27:09 -- accel/accel.sh@20 -- # IFS=: 00:03:30.518 13:27:09 -- accel/accel.sh@20 -- # read -r var val 00:03:30.518 13:27:09 -- accel/accel.sh@21 -- # val= 00:03:30.518 13:27:09 -- accel/accel.sh@22 -- # case "$var" in 00:03:30.518 13:27:09 -- accel/accel.sh@20 -- # IFS=: 00:03:30.518 13:27:09 -- accel/accel.sh@20 -- # read -r var val 00:03:30.518 13:27:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:03:30.518 13:27:09 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:03:30.518 13:27:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:03:30.518 00:03:30.518 real 0m3.507s 00:03:30.518 user 0m2.453s 00:03:30.518 sys 0m1.065s 00:03:30.518 13:27:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.518 13:27:09 -- common/autotest_common.sh@10 -- # set +x 00:03:30.518 ************************************ 00:03:30.518 END TEST accel_fill 00:03:30.518 ************************************ 00:03:30.518 13:27:09 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:03:30.518 13:27:09 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:03:30.518 13:27:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:30.518 13:27:09 -- common/autotest_common.sh@10 -- # set +x 00:03:30.518 ************************************ 00:03:30.518 START TEST accel_copy_crc32c 00:03:30.518 ************************************ 00:03:30.518 13:27:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:03:30.518 13:27:09 -- accel/accel.sh@16 -- # local accel_opc 00:03:30.518 13:27:09 -- accel/accel.sh@17 -- # local accel_module 00:03:30.518 13:27:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:03:30.518 13:27:09 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.oQpZp6 -t 1 -w copy_crc32c -y 00:03:30.518 [2024-07-10 
13:27:09.612188] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:30.518 [2024-07-10 13:27:09.612404] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:30.776 EAL: TSC is not safe to use in SMP mode 00:03:30.776 EAL: TSC is not invariant 00:03:30.776 [2024-07-10 13:27:10.086402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:31.034 [2024-07-10 13:27:10.182003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:31.034 13:27:10 -- accel/accel.sh@12 -- # build_accel_config 00:03:31.034 13:27:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:31.034 13:27:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:31.034 13:27:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:31.034 13:27:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:31.034 13:27:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:31.034 13:27:10 -- accel/accel.sh@41 -- # local IFS=, 00:03:31.034 13:27:10 -- accel/accel.sh@42 -- # jq -r . 00:03:31.968 13:27:11 -- accel/accel.sh@18 -- # out=' 00:03:31.968 SPDK Configuration: 00:03:31.968 Core mask: 0x1 00:03:31.968 00:03:31.968 Accel Perf Configuration: 00:03:31.968 Workload Type: copy_crc32c 00:03:31.968 CRC-32C seed: 0 00:03:31.968 Vector size: 4096 bytes 00:03:31.968 Transfer size: 4096 bytes 00:03:31.968 Vector count 1 00:03:31.968 Module: software 00:03:31.968 Queue depth: 32 00:03:31.968 Allocate depth: 32 00:03:31.968 # threads/core: 1 00:03:31.968 Run time: 1 seconds 00:03:31.968 Verify: Yes 00:03:31.968 00:03:31.968 Running for 1 seconds... 00:03:31.968 00:03:31.968 Core,Thread Transfers Bandwidth Failed Miscompares 00:03:31.968 ------------------------------------------------------------------------------------ 00:03:31.968 0,0 1107680/s 4326 MiB/s 0 0 00:03:31.968 ==================================================================================== 00:03:31.968 Total 1107680/s 4326 MiB/s 0 0' 00:03:31.968 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:31.968 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:31.968 13:27:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:03:31.968 13:27:11 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.7Xzl2q -t 1 -w copy_crc32c -y 00:03:32.228 [2024-07-10 13:27:11.328744] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:32.228 [2024-07-10 13:27:11.328922] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:32.488 EAL: TSC is not safe to use in SMP mode 00:03:32.488 EAL: TSC is not invariant 00:03:32.488 [2024-07-10 13:27:11.802945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:32.748 [2024-07-10 13:27:11.897571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:32.748 13:27:11 -- accel/accel.sh@12 -- # build_accel_config 00:03:32.748 13:27:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:32.748 13:27:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:32.748 13:27:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:32.748 13:27:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:32.748 13:27:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:32.748 13:27:11 -- accel/accel.sh@41 -- # local IFS=, 00:03:32.748 13:27:11 -- accel/accel.sh@42 -- # jq -r . 
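The copy_crc32c case above copies a 4096-byte source and computes a CRC-32C (Castagnoli) checksum over the data, seeded with 0 per the report. As a rough model of that combination, the sketch below uses a plain bitwise CRC-32C with the reflected polynomial 0x82F63B78; it is illustrative only, not SPDK's implementation, and the way the seed is folded in here is this sketch's assumption.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78. */
    static uint32_t crc32c(uint32_t crc, const uint8_t *buf, size_t len)
    {
        crc = ~crc;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
        }
        return ~crc;
    }

    int main(void)
    {
        static uint8_t src[4096], dst[4096];          /* Vector/Transfer size: 4096 bytes */

        memset(src, 0xA5, sizeof(src));               /* arbitrary test pattern */
        memcpy(dst, src, sizeof(dst));                /* the "copy" half of the operation */
        uint32_t crc = crc32c(0, dst, sizeof(dst));   /* seed 0, the "crc32c" half */
        printf("crc32c = 0x%08x\n", (unsigned)crc);
        return 0;
    }

With this convention, feeding the returned value back in as the seed of a second call accumulates one CRC across several buffers, which is the kind of multi-vector shape the -C 2 variant below drives.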
00:03:32.748 13:27:11 -- accel/accel.sh@21 -- # val= 00:03:32.748 13:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:32.748 13:27:11 -- accel/accel.sh@21 -- # val= 00:03:32.748 13:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:32.748 13:27:11 -- accel/accel.sh@21 -- # val=0x1 00:03:32.748 13:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:32.748 13:27:11 -- accel/accel.sh@21 -- # val= 00:03:32.748 13:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:32.748 13:27:11 -- accel/accel.sh@21 -- # val= 00:03:32.748 13:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:32.748 13:27:11 -- accel/accel.sh@21 -- # val=copy_crc32c 00:03:32.748 13:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:03:32.748 13:27:11 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:32.748 13:27:11 -- accel/accel.sh@21 -- # val=0 00:03:32.748 13:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:32.748 13:27:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:32.748 13:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:32.748 13:27:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:32.748 13:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:32.748 13:27:11 -- accel/accel.sh@21 -- # val= 00:03:32.748 13:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:32.748 13:27:11 -- accel/accel.sh@21 -- # val=software 00:03:32.748 13:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:03:32.748 13:27:11 -- accel/accel.sh@23 -- # accel_module=software 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:32.748 13:27:11 -- accel/accel.sh@21 -- # val=32 00:03:32.748 13:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:32.748 13:27:11 -- accel/accel.sh@21 -- # val=32 00:03:32.748 13:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:32.748 13:27:11 -- accel/accel.sh@21 -- # val=1 00:03:32.748 13:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:32.748 13:27:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:03:32.748 13:27:11 
-- accel/accel.sh@22 -- # case "$var" in 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:32.748 13:27:11 -- accel/accel.sh@21 -- # val=Yes 00:03:32.748 13:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:32.748 13:27:11 -- accel/accel.sh@21 -- # val= 00:03:32.748 13:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:32.748 13:27:11 -- accel/accel.sh@21 -- # val= 00:03:32.748 13:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # IFS=: 00:03:32.748 13:27:11 -- accel/accel.sh@20 -- # read -r var val 00:03:33.687 13:27:13 -- accel/accel.sh@21 -- # val= 00:03:33.687 13:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:03:33.687 13:27:13 -- accel/accel.sh@20 -- # IFS=: 00:03:33.687 13:27:13 -- accel/accel.sh@20 -- # read -r var val 00:03:33.687 13:27:13 -- accel/accel.sh@21 -- # val= 00:03:33.687 13:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:03:33.687 13:27:13 -- accel/accel.sh@20 -- # IFS=: 00:03:33.687 13:27:13 -- accel/accel.sh@20 -- # read -r var val 00:03:33.687 13:27:13 -- accel/accel.sh@21 -- # val= 00:03:33.687 13:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:03:33.687 13:27:13 -- accel/accel.sh@20 -- # IFS=: 00:03:33.687 13:27:13 -- accel/accel.sh@20 -- # read -r var val 00:03:33.687 13:27:13 -- accel/accel.sh@21 -- # val= 00:03:33.687 13:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:03:33.687 13:27:13 -- accel/accel.sh@20 -- # IFS=: 00:03:33.687 13:27:13 -- accel/accel.sh@20 -- # read -r var val 00:03:33.687 13:27:13 -- accel/accel.sh@21 -- # val= 00:03:33.687 13:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:03:33.687 13:27:13 -- accel/accel.sh@20 -- # IFS=: 00:03:33.687 13:27:13 -- accel/accel.sh@20 -- # read -r var val 00:03:33.687 13:27:13 -- accel/accel.sh@21 -- # val= 00:03:33.687 13:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:03:33.687 13:27:13 -- accel/accel.sh@20 -- # IFS=: 00:03:33.687 13:27:13 -- accel/accel.sh@20 -- # read -r var val 00:03:33.687 13:27:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:03:33.687 13:27:13 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:03:33.687 13:27:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:03:33.687 00:03:33.687 real 0m3.433s 00:03:33.687 user 0m2.392s 00:03:33.687 sys 0m1.050s 00:03:33.687 13:27:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.687 13:27:13 -- common/autotest_common.sh@10 -- # set +x 00:03:33.687 ************************************ 00:03:33.687 END TEST accel_copy_crc32c 00:03:33.687 ************************************ 00:03:33.945 13:27:13 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:03:33.945 13:27:13 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:03:33.945 13:27:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:33.945 13:27:13 -- common/autotest_common.sh@10 -- # set +x 00:03:33.945 ************************************ 00:03:33.945 START TEST accel_copy_crc32c_C2 00:03:33.945 ************************************ 00:03:33.945 13:27:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:03:33.945 13:27:13 -- accel/accel.sh@16 -- # local accel_opc 00:03:33.945 13:27:13 -- accel/accel.sh@17 -- # 
local accel_module 00:03:33.945 13:27:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:03:33.945 13:27:13 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.PRIsMN -t 1 -w copy_crc32c -y -C 2 00:03:33.945 [2024-07-10 13:27:13.091158] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:33.945 [2024-07-10 13:27:13.091706] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:34.204 EAL: TSC is not safe to use in SMP mode 00:03:34.204 EAL: TSC is not invariant 00:03:34.462 [2024-07-10 13:27:13.563704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:34.462 [2024-07-10 13:27:13.671157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:34.462 13:27:13 -- accel/accel.sh@12 -- # build_accel_config 00:03:34.462 13:27:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:34.462 13:27:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:34.462 13:27:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:34.462 13:27:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:34.462 13:27:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:34.462 13:27:13 -- accel/accel.sh@41 -- # local IFS=, 00:03:34.462 13:27:13 -- accel/accel.sh@42 -- # jq -r . 00:03:35.842 13:27:14 -- accel/accel.sh@18 -- # out=' 00:03:35.842 SPDK Configuration: 00:03:35.842 Core mask: 0x1 00:03:35.842 00:03:35.842 Accel Perf Configuration: 00:03:35.842 Workload Type: copy_crc32c 00:03:35.842 CRC-32C seed: 0 00:03:35.842 Vector size: 4096 bytes 00:03:35.842 Transfer size: 8192 bytes 00:03:35.842 Vector count 2 00:03:35.842 Module: software 00:03:35.842 Queue depth: 32 00:03:35.842 Allocate depth: 32 00:03:35.842 # threads/core: 1 00:03:35.842 Run time: 1 seconds 00:03:35.842 Verify: Yes 00:03:35.842 00:03:35.842 Running for 1 seconds... 00:03:35.842 00:03:35.842 Core,Thread Transfers Bandwidth Failed Miscompares 00:03:35.842 ------------------------------------------------------------------------------------ 00:03:35.842 0,0 601280/s 4697 MiB/s 0 0 00:03:35.842 ==================================================================================== 00:03:35.842 Total 601280/s 2348 MiB/s 0 0' 00:03:35.842 13:27:14 -- accel/accel.sh@20 -- # IFS=: 00:03:35.842 13:27:14 -- accel/accel.sh@20 -- # read -r var val 00:03:35.842 13:27:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:03:35.842 13:27:14 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.gUY4U6 -t 1 -w copy_crc32c -y -C 2 00:03:35.842 [2024-07-10 13:27:14.819204] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
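One way to read the tables in these reports: the Bandwidth column is consistent with transfers per second multiplied by the transfer size, expressed in MiB/s. Taking the per-core 0,0 row of the -C 2 report above (601280 transfers/s at 8192 bytes per transfer), the arithmetic below lands on the ~4697 MiB/s figure; this is only a reading aid, not code from the test.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Figures from the 0,0 row of the copy_crc32c -C 2 report above. */
        uint64_t transfers_per_sec = 601280;
        uint64_t transfer_size = 8192;                          /* bytes per transfer */
        double mib_per_sec = (double)(transfers_per_sec * transfer_size)
                             / (1024.0 * 1024.0);

        printf("%.1f MiB/s\n", mib_per_sec);                    /* prints 4697.5 */
        return 0;
    }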
00:03:35.842 [2024-07-10 13:27:14.819433] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:36.100 EAL: TSC is not safe to use in SMP mode 00:03:36.100 EAL: TSC is not invariant 00:03:36.100 [2024-07-10 13:27:15.288414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.100 [2024-07-10 13:27:15.390275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:36.100 13:27:15 -- accel/accel.sh@12 -- # build_accel_config 00:03:36.100 13:27:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:36.100 13:27:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:36.100 13:27:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:36.100 13:27:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:36.100 13:27:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:36.100 13:27:15 -- accel/accel.sh@41 -- # local IFS=, 00:03:36.100 13:27:15 -- accel/accel.sh@42 -- # jq -r . 00:03:36.100 13:27:15 -- accel/accel.sh@21 -- # val= 00:03:36.100 13:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:03:36.100 13:27:15 -- accel/accel.sh@20 -- # IFS=: 00:03:36.100 13:27:15 -- accel/accel.sh@20 -- # read -r var val 00:03:36.100 13:27:15 -- accel/accel.sh@21 -- # val= 00:03:36.100 13:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:03:36.100 13:27:15 -- accel/accel.sh@20 -- # IFS=: 00:03:36.100 13:27:15 -- accel/accel.sh@20 -- # read -r var val 00:03:36.100 13:27:15 -- accel/accel.sh@21 -- # val=0x1 00:03:36.100 13:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:03:36.100 13:27:15 -- accel/accel.sh@20 -- # IFS=: 00:03:36.100 13:27:15 -- accel/accel.sh@20 -- # read -r var val 00:03:36.100 13:27:15 -- accel/accel.sh@21 -- # val= 00:03:36.100 13:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:03:36.100 13:27:15 -- accel/accel.sh@20 -- # IFS=: 00:03:36.100 13:27:15 -- accel/accel.sh@20 -- # read -r var val 00:03:36.101 13:27:15 -- accel/accel.sh@21 -- # val= 00:03:36.101 13:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # IFS=: 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # read -r var val 00:03:36.101 13:27:15 -- accel/accel.sh@21 -- # val=copy_crc32c 00:03:36.101 13:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:03:36.101 13:27:15 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # IFS=: 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # read -r var val 00:03:36.101 13:27:15 -- accel/accel.sh@21 -- # val=0 00:03:36.101 13:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # IFS=: 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # read -r var val 00:03:36.101 13:27:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:36.101 13:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # IFS=: 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # read -r var val 00:03:36.101 13:27:15 -- accel/accel.sh@21 -- # val='8192 bytes' 00:03:36.101 13:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # IFS=: 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # read -r var val 00:03:36.101 13:27:15 -- accel/accel.sh@21 -- # val= 00:03:36.101 13:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # IFS=: 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # read -r var val 00:03:36.101 13:27:15 -- accel/accel.sh@21 -- # val=software 00:03:36.101 13:27:15 -- accel/accel.sh@22 -- # 
case "$var" in 00:03:36.101 13:27:15 -- accel/accel.sh@23 -- # accel_module=software 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # IFS=: 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # read -r var val 00:03:36.101 13:27:15 -- accel/accel.sh@21 -- # val=32 00:03:36.101 13:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # IFS=: 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # read -r var val 00:03:36.101 13:27:15 -- accel/accel.sh@21 -- # val=32 00:03:36.101 13:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # IFS=: 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # read -r var val 00:03:36.101 13:27:15 -- accel/accel.sh@21 -- # val=1 00:03:36.101 13:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # IFS=: 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # read -r var val 00:03:36.101 13:27:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:03:36.101 13:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # IFS=: 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # read -r var val 00:03:36.101 13:27:15 -- accel/accel.sh@21 -- # val=Yes 00:03:36.101 13:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # IFS=: 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # read -r var val 00:03:36.101 13:27:15 -- accel/accel.sh@21 -- # val= 00:03:36.101 13:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # IFS=: 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # read -r var val 00:03:36.101 13:27:15 -- accel/accel.sh@21 -- # val= 00:03:36.101 13:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # IFS=: 00:03:36.101 13:27:15 -- accel/accel.sh@20 -- # read -r var val 00:03:37.477 13:27:16 -- accel/accel.sh@21 -- # val= 00:03:37.477 13:27:16 -- accel/accel.sh@22 -- # case "$var" in 00:03:37.477 13:27:16 -- accel/accel.sh@20 -- # IFS=: 00:03:37.477 13:27:16 -- accel/accel.sh@20 -- # read -r var val 00:03:37.477 13:27:16 -- accel/accel.sh@21 -- # val= 00:03:37.477 13:27:16 -- accel/accel.sh@22 -- # case "$var" in 00:03:37.477 13:27:16 -- accel/accel.sh@20 -- # IFS=: 00:03:37.477 13:27:16 -- accel/accel.sh@20 -- # read -r var val 00:03:37.477 13:27:16 -- accel/accel.sh@21 -- # val= 00:03:37.477 13:27:16 -- accel/accel.sh@22 -- # case "$var" in 00:03:37.477 13:27:16 -- accel/accel.sh@20 -- # IFS=: 00:03:37.477 13:27:16 -- accel/accel.sh@20 -- # read -r var val 00:03:37.477 13:27:16 -- accel/accel.sh@21 -- # val= 00:03:37.477 13:27:16 -- accel/accel.sh@22 -- # case "$var" in 00:03:37.477 13:27:16 -- accel/accel.sh@20 -- # IFS=: 00:03:37.477 13:27:16 -- accel/accel.sh@20 -- # read -r var val 00:03:37.477 13:27:16 -- accel/accel.sh@21 -- # val= 00:03:37.477 13:27:16 -- accel/accel.sh@22 -- # case "$var" in 00:03:37.477 13:27:16 -- accel/accel.sh@20 -- # IFS=: 00:03:37.477 13:27:16 -- accel/accel.sh@20 -- # read -r var val 00:03:37.477 13:27:16 -- accel/accel.sh@21 -- # val= 00:03:37.477 13:27:16 -- accel/accel.sh@22 -- # case "$var" in 00:03:37.477 13:27:16 -- accel/accel.sh@20 -- # IFS=: 00:03:37.477 13:27:16 -- accel/accel.sh@20 -- # read -r var val 00:03:37.477 13:27:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:03:37.477 13:27:16 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:03:37.477 13:27:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:03:37.477 00:03:37.477 real 0m3.464s 00:03:37.477 user 0m2.414s 
00:03:37.477 sys 0m1.061s 00:03:37.477 13:27:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.477 13:27:16 -- common/autotest_common.sh@10 -- # set +x 00:03:37.477 ************************************ 00:03:37.477 END TEST accel_copy_crc32c_C2 00:03:37.477 ************************************ 00:03:37.477 13:27:16 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:03:37.477 13:27:16 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:03:37.477 13:27:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:37.477 13:27:16 -- common/autotest_common.sh@10 -- # set +x 00:03:37.477 ************************************ 00:03:37.477 START TEST accel_dualcast 00:03:37.477 ************************************ 00:03:37.477 13:27:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:03:37.477 13:27:16 -- accel/accel.sh@16 -- # local accel_opc 00:03:37.477 13:27:16 -- accel/accel.sh@17 -- # local accel_module 00:03:37.477 13:27:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:03:37.477 13:27:16 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.efihrs -t 1 -w dualcast -y 00:03:37.477 [2024-07-10 13:27:16.591333] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:37.477 [2024-07-10 13:27:16.591554] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:37.735 EAL: TSC is not safe to use in SMP mode 00:03:37.735 EAL: TSC is not invariant 00:03:37.735 [2024-07-10 13:27:17.059174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:37.994 [2024-07-10 13:27:17.164327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:37.994 13:27:17 -- accel/accel.sh@12 -- # build_accel_config 00:03:37.994 13:27:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:37.994 13:27:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:37.994 13:27:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:37.994 13:27:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:37.994 13:27:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:37.994 13:27:17 -- accel/accel.sh@41 -- # local IFS=, 00:03:37.994 13:27:17 -- accel/accel.sh@42 -- # jq -r . 00:03:39.371 13:27:18 -- accel/accel.sh@18 -- # out=' 00:03:39.371 SPDK Configuration: 00:03:39.371 Core mask: 0x1 00:03:39.371 00:03:39.371 Accel Perf Configuration: 00:03:39.371 Workload Type: dualcast 00:03:39.371 Transfer size: 4096 bytes 00:03:39.371 Vector count 1 00:03:39.371 Module: software 00:03:39.371 Queue depth: 32 00:03:39.371 Allocate depth: 32 00:03:39.371 # threads/core: 1 00:03:39.371 Run time: 1 seconds 00:03:39.371 Verify: Yes 00:03:39.371 00:03:39.371 Running for 1 seconds... 
00:03:39.371 00:03:39.371 Core,Thread Transfers Bandwidth Failed Miscompares 00:03:39.371 ------------------------------------------------------------------------------------ 00:03:39.371 0,0 1393696/s 5444 MiB/s 0 0 00:03:39.371 ==================================================================================== 00:03:39.371 Total 1393696/s 5444 MiB/s 0 0' 00:03:39.371 13:27:18 -- accel/accel.sh@20 -- # IFS=: 00:03:39.371 13:27:18 -- accel/accel.sh@20 -- # read -r var val 00:03:39.371 13:27:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:03:39.371 13:27:18 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.uXCT4d -t 1 -w dualcast -y 00:03:39.371 [2024-07-10 13:27:18.320425] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:39.371 [2024-07-10 13:27:18.320838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:39.631 EAL: TSC is not safe to use in SMP mode 00:03:39.631 EAL: TSC is not invariant 00:03:39.631 [2024-07-10 13:27:18.784248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.631 [2024-07-10 13:27:18.879567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.631 13:27:18 -- accel/accel.sh@12 -- # build_accel_config 00:03:39.631 13:27:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:39.631 13:27:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:39.631 13:27:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:39.631 13:27:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:39.631 13:27:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:39.632 13:27:18 -- accel/accel.sh@41 -- # local IFS=, 00:03:39.632 13:27:18 -- accel/accel.sh@42 -- # jq -r . 
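The sketch below models the dualcast case reported above as one 4096-byte source broadcast into two destination buffers, each verified after the copy. It is illustrative only, not SPDK's accel module code, and the one-source/two-destination reading of "dualcast" is this sketch's assumption.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative dualcast: broadcast one source into two destinations, then verify. */
    static int do_dualcast(uint8_t *d0, uint8_t *d1, const uint8_t *src, size_t len)
    {
        memcpy(d0, src, len);
        memcpy(d1, src, len);
        return (memcmp(d0, src, len) == 0 && memcmp(d1, src, len) == 0) ? 0 : -1;
    }

    int main(void)
    {
        static uint8_t src[4096], d0[4096], d1[4096];   /* Transfer size: 4096 bytes */

        memset(src, 0x3C, sizeof(src));                 /* arbitrary test pattern */
        if (do_dualcast(d0, d1, src, sizeof(src)) != 0) {
            fprintf(stderr, "dualcast miscompare\n");
            return 1;
        }
        printf("dualcast of %zu bytes ok\n", sizeof(src));
        return 0;
    }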
00:03:39.632 13:27:18 -- accel/accel.sh@21 -- # val= 00:03:39.632 13:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # IFS=: 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # read -r var val 00:03:39.632 13:27:18 -- accel/accel.sh@21 -- # val= 00:03:39.632 13:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # IFS=: 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # read -r var val 00:03:39.632 13:27:18 -- accel/accel.sh@21 -- # val=0x1 00:03:39.632 13:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # IFS=: 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # read -r var val 00:03:39.632 13:27:18 -- accel/accel.sh@21 -- # val= 00:03:39.632 13:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # IFS=: 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # read -r var val 00:03:39.632 13:27:18 -- accel/accel.sh@21 -- # val= 00:03:39.632 13:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # IFS=: 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # read -r var val 00:03:39.632 13:27:18 -- accel/accel.sh@21 -- # val=dualcast 00:03:39.632 13:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:03:39.632 13:27:18 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # IFS=: 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # read -r var val 00:03:39.632 13:27:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:39.632 13:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # IFS=: 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # read -r var val 00:03:39.632 13:27:18 -- accel/accel.sh@21 -- # val= 00:03:39.632 13:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # IFS=: 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # read -r var val 00:03:39.632 13:27:18 -- accel/accel.sh@21 -- # val=software 00:03:39.632 13:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:03:39.632 13:27:18 -- accel/accel.sh@23 -- # accel_module=software 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # IFS=: 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # read -r var val 00:03:39.632 13:27:18 -- accel/accel.sh@21 -- # val=32 00:03:39.632 13:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # IFS=: 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # read -r var val 00:03:39.632 13:27:18 -- accel/accel.sh@21 -- # val=32 00:03:39.632 13:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # IFS=: 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # read -r var val 00:03:39.632 13:27:18 -- accel/accel.sh@21 -- # val=1 00:03:39.632 13:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # IFS=: 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # read -r var val 00:03:39.632 13:27:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:03:39.632 13:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # IFS=: 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # read -r var val 00:03:39.632 13:27:18 -- accel/accel.sh@21 -- # val=Yes 00:03:39.632 13:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # IFS=: 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # read -r var val 00:03:39.632 13:27:18 -- accel/accel.sh@21 -- # val= 00:03:39.632 13:27:18 -- 
accel/accel.sh@22 -- # case "$var" in 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # IFS=: 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # read -r var val 00:03:39.632 13:27:18 -- accel/accel.sh@21 -- # val= 00:03:39.632 13:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # IFS=: 00:03:39.632 13:27:18 -- accel/accel.sh@20 -- # read -r var val 00:03:41.011 13:27:20 -- accel/accel.sh@21 -- # val= 00:03:41.011 13:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:03:41.011 13:27:20 -- accel/accel.sh@20 -- # IFS=: 00:03:41.011 13:27:20 -- accel/accel.sh@20 -- # read -r var val 00:03:41.011 13:27:20 -- accel/accel.sh@21 -- # val= 00:03:41.011 13:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:03:41.011 13:27:20 -- accel/accel.sh@20 -- # IFS=: 00:03:41.011 13:27:20 -- accel/accel.sh@20 -- # read -r var val 00:03:41.011 13:27:20 -- accel/accel.sh@21 -- # val= 00:03:41.011 13:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:03:41.011 13:27:20 -- accel/accel.sh@20 -- # IFS=: 00:03:41.011 13:27:20 -- accel/accel.sh@20 -- # read -r var val 00:03:41.011 13:27:20 -- accel/accel.sh@21 -- # val= 00:03:41.011 13:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:03:41.011 13:27:20 -- accel/accel.sh@20 -- # IFS=: 00:03:41.011 13:27:20 -- accel/accel.sh@20 -- # read -r var val 00:03:41.011 13:27:20 -- accel/accel.sh@21 -- # val= 00:03:41.011 13:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:03:41.011 13:27:20 -- accel/accel.sh@20 -- # IFS=: 00:03:41.011 13:27:20 -- accel/accel.sh@20 -- # read -r var val 00:03:41.011 13:27:20 -- accel/accel.sh@21 -- # val= 00:03:41.011 13:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:03:41.011 13:27:20 -- accel/accel.sh@20 -- # IFS=: 00:03:41.011 13:27:20 -- accel/accel.sh@20 -- # read -r var val 00:03:41.011 13:27:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:03:41.011 13:27:20 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:03:41.011 13:27:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:03:41.011 00:03:41.011 real 0m3.444s 00:03:41.011 user 0m2.451s 00:03:41.011 sys 0m1.010s 00:03:41.011 13:27:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.011 13:27:20 -- common/autotest_common.sh@10 -- # set +x 00:03:41.011 ************************************ 00:03:41.011 END TEST accel_dualcast 00:03:41.011 ************************************ 00:03:41.011 13:27:20 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:03:41.011 13:27:20 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:03:41.011 13:27:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.011 13:27:20 -- common/autotest_common.sh@10 -- # set +x 00:03:41.011 ************************************ 00:03:41.011 START TEST accel_compare 00:03:41.011 ************************************ 00:03:41.011 13:27:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:03:41.011 13:27:20 -- accel/accel.sh@16 -- # local accel_opc 00:03:41.011 13:27:20 -- accel/accel.sh@17 -- # local accel_module 00:03:41.011 13:27:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:03:41.011 13:27:20 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ISpb0T -t 1 -w compare -y 00:03:41.011 [2024-07-10 13:27:20.090955] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:03:41.011 [2024-07-10 13:27:20.091344] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:41.276 EAL: TSC is not safe to use in SMP mode 00:03:41.276 EAL: TSC is not invariant 00:03:41.276 [2024-07-10 13:27:20.546835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.569 [2024-07-10 13:27:20.636696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.569 13:27:20 -- accel/accel.sh@12 -- # build_accel_config 00:03:41.569 13:27:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:41.569 13:27:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:41.569 13:27:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:41.569 13:27:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:41.569 13:27:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:41.569 13:27:20 -- accel/accel.sh@41 -- # local IFS=, 00:03:41.569 13:27:20 -- accel/accel.sh@42 -- # jq -r . 00:03:42.507 13:27:21 -- accel/accel.sh@18 -- # out=' 00:03:42.507 SPDK Configuration: 00:03:42.507 Core mask: 0x1 00:03:42.507 00:03:42.507 Accel Perf Configuration: 00:03:42.507 Workload Type: compare 00:03:42.507 Transfer size: 4096 bytes 00:03:42.507 Vector count 1 00:03:42.507 Module: software 00:03:42.507 Queue depth: 32 00:03:42.507 Allocate depth: 32 00:03:42.507 # threads/core: 1 00:03:42.507 Run time: 1 seconds 00:03:42.507 Verify: Yes 00:03:42.507 00:03:42.507 Running for 1 seconds... 00:03:42.507 00:03:42.507 Core,Thread Transfers Bandwidth Failed Miscompares 00:03:42.507 ------------------------------------------------------------------------------------ 00:03:42.507 0,0 2905728/s 11350 MiB/s 0 0 00:03:42.507 ==================================================================================== 00:03:42.508 Total 2905728/s 11350 MiB/s 0 0' 00:03:42.508 13:27:21 -- accel/accel.sh@20 -- # IFS=: 00:03:42.508 13:27:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:03:42.508 13:27:21 -- accel/accel.sh@20 -- # read -r var val 00:03:42.508 13:27:21 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.nRspUZ -t 1 -w compare -y 00:03:42.508 [2024-07-10 13:27:21.788356] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:42.508 [2024-07-10 13:27:21.788707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:43.077 EAL: TSC is not safe to use in SMP mode 00:03:43.077 EAL: TSC is not invariant 00:03:43.077 [2024-07-10 13:27:22.226150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:43.077 [2024-07-10 13:27:22.317199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:43.077 13:27:22 -- accel/accel.sh@12 -- # build_accel_config 00:03:43.077 13:27:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:43.077 13:27:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:43.077 13:27:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:43.077 13:27:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:43.077 13:27:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:43.077 13:27:22 -- accel/accel.sh@41 -- # local IFS=, 00:03:43.077 13:27:22 -- accel/accel.sh@42 -- # jq -r . 
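The compare workload above is the lightest of the set, which lines up with it posting the highest transfer rate in this run: nothing is written, the two 4096-byte buffers are only checked against each other. A minimal stand-alone illustration follows; again this is not the accel module's code.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        static uint8_t a[4096], b[4096];      /* Transfer size: 4096 bytes */

        memset(a, 0x7E, sizeof(a));           /* arbitrary pattern */
        memcpy(b, a, sizeof(b));              /* identical buffers, so the compare passes */

        /* The compare workload boils down to checking the buffers against each
         * other; a mismatch would land in the "Miscompares" column. */
        int rc = memcmp(a, b, sizeof(a));
        printf("compare %s\n", rc == 0 ? "matched" : "miscompared");
        return rc == 0 ? 0 : 1;
    }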
00:03:43.077 13:27:22 -- accel/accel.sh@21 -- # val= 00:03:43.077 13:27:22 -- accel/accel.sh@22 -- # case "$var" in 00:03:43.077 13:27:22 -- accel/accel.sh@20 -- # IFS=: 00:03:43.077 13:27:22 -- accel/accel.sh@20 -- # read -r var val 00:03:43.077 13:27:22 -- accel/accel.sh@21 -- # val= 00:03:43.077 13:27:22 -- accel/accel.sh@22 -- # case "$var" in 00:03:43.077 13:27:22 -- accel/accel.sh@20 -- # IFS=: 00:03:43.077 13:27:22 -- accel/accel.sh@20 -- # read -r var val 00:03:43.077 13:27:22 -- accel/accel.sh@21 -- # val=0x1 00:03:43.077 13:27:22 -- accel/accel.sh@22 -- # case "$var" in 00:03:43.077 13:27:22 -- accel/accel.sh@20 -- # IFS=: 00:03:43.077 13:27:22 -- accel/accel.sh@20 -- # read -r var val 00:03:43.077 13:27:22 -- accel/accel.sh@21 -- # val= 00:03:43.077 13:27:22 -- accel/accel.sh@22 -- # case "$var" in 00:03:43.077 13:27:22 -- accel/accel.sh@20 -- # IFS=: 00:03:43.077 13:27:22 -- accel/accel.sh@20 -- # read -r var val 00:03:43.077 13:27:22 -- accel/accel.sh@21 -- # val= 00:03:43.077 13:27:22 -- accel/accel.sh@22 -- # case "$var" in 00:03:43.077 13:27:22 -- accel/accel.sh@20 -- # IFS=: 00:03:43.077 13:27:22 -- accel/accel.sh@20 -- # read -r var val 00:03:43.077 13:27:22 -- accel/accel.sh@21 -- # val=compare 00:03:43.077 13:27:22 -- accel/accel.sh@22 -- # case "$var" in 00:03:43.077 13:27:22 -- accel/accel.sh@24 -- # accel_opc=compare 00:03:43.077 13:27:22 -- accel/accel.sh@20 -- # IFS=: 00:03:43.077 13:27:22 -- accel/accel.sh@20 -- # read -r var val 00:03:43.077 13:27:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:43.077 13:27:22 -- accel/accel.sh@22 -- # case "$var" in 00:03:43.077 13:27:22 -- accel/accel.sh@20 -- # IFS=: 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # read -r var val 00:03:43.078 13:27:22 -- accel/accel.sh@21 -- # val= 00:03:43.078 13:27:22 -- accel/accel.sh@22 -- # case "$var" in 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # IFS=: 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # read -r var val 00:03:43.078 13:27:22 -- accel/accel.sh@21 -- # val=software 00:03:43.078 13:27:22 -- accel/accel.sh@22 -- # case "$var" in 00:03:43.078 13:27:22 -- accel/accel.sh@23 -- # accel_module=software 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # IFS=: 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # read -r var val 00:03:43.078 13:27:22 -- accel/accel.sh@21 -- # val=32 00:03:43.078 13:27:22 -- accel/accel.sh@22 -- # case "$var" in 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # IFS=: 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # read -r var val 00:03:43.078 13:27:22 -- accel/accel.sh@21 -- # val=32 00:03:43.078 13:27:22 -- accel/accel.sh@22 -- # case "$var" in 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # IFS=: 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # read -r var val 00:03:43.078 13:27:22 -- accel/accel.sh@21 -- # val=1 00:03:43.078 13:27:22 -- accel/accel.sh@22 -- # case "$var" in 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # IFS=: 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # read -r var val 00:03:43.078 13:27:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:03:43.078 13:27:22 -- accel/accel.sh@22 -- # case "$var" in 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # IFS=: 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # read -r var val 00:03:43.078 13:27:22 -- accel/accel.sh@21 -- # val=Yes 00:03:43.078 13:27:22 -- accel/accel.sh@22 -- # case "$var" in 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # IFS=: 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # read -r var val 00:03:43.078 13:27:22 -- accel/accel.sh@21 -- # val= 00:03:43.078 13:27:22 -- 
accel/accel.sh@22 -- # case "$var" in 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # IFS=: 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # read -r var val 00:03:43.078 13:27:22 -- accel/accel.sh@21 -- # val= 00:03:43.078 13:27:22 -- accel/accel.sh@22 -- # case "$var" in 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # IFS=: 00:03:43.078 13:27:22 -- accel/accel.sh@20 -- # read -r var val 00:03:44.457 13:27:23 -- accel/accel.sh@21 -- # val= 00:03:44.457 13:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:03:44.457 13:27:23 -- accel/accel.sh@20 -- # IFS=: 00:03:44.457 13:27:23 -- accel/accel.sh@20 -- # read -r var val 00:03:44.457 13:27:23 -- accel/accel.sh@21 -- # val= 00:03:44.457 13:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:03:44.457 13:27:23 -- accel/accel.sh@20 -- # IFS=: 00:03:44.457 13:27:23 -- accel/accel.sh@20 -- # read -r var val 00:03:44.457 13:27:23 -- accel/accel.sh@21 -- # val= 00:03:44.457 13:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:03:44.457 13:27:23 -- accel/accel.sh@20 -- # IFS=: 00:03:44.457 13:27:23 -- accel/accel.sh@20 -- # read -r var val 00:03:44.457 13:27:23 -- accel/accel.sh@21 -- # val= 00:03:44.457 13:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:03:44.457 13:27:23 -- accel/accel.sh@20 -- # IFS=: 00:03:44.457 13:27:23 -- accel/accel.sh@20 -- # read -r var val 00:03:44.457 13:27:23 -- accel/accel.sh@21 -- # val= 00:03:44.457 13:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:03:44.457 13:27:23 -- accel/accel.sh@20 -- # IFS=: 00:03:44.457 13:27:23 -- accel/accel.sh@20 -- # read -r var val 00:03:44.457 13:27:23 -- accel/accel.sh@21 -- # val= 00:03:44.457 13:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:03:44.457 13:27:23 -- accel/accel.sh@20 -- # IFS=: 00:03:44.457 13:27:23 -- accel/accel.sh@20 -- # read -r var val 00:03:44.457 13:27:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:03:44.457 13:27:23 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:03:44.457 13:27:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:03:44.457 00:03:44.457 real 0m3.384s 00:03:44.457 user 0m2.392s 00:03:44.457 sys 0m1.005s 00:03:44.457 13:27:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.457 13:27:23 -- common/autotest_common.sh@10 -- # set +x 00:03:44.457 ************************************ 00:03:44.457 END TEST accel_compare 00:03:44.457 ************************************ 00:03:44.457 13:27:23 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:03:44.457 13:27:23 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:03:44.457 13:27:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:44.457 13:27:23 -- common/autotest_common.sh@10 -- # set +x 00:03:44.457 ************************************ 00:03:44.457 START TEST accel_xor 00:03:44.457 ************************************ 00:03:44.457 13:27:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:03:44.457 13:27:23 -- accel/accel.sh@16 -- # local accel_opc 00:03:44.457 13:27:23 -- accel/accel.sh@17 -- # local accel_module 00:03:44.457 13:27:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:03:44.457 13:27:23 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.iwextU -t 1 -w xor -y 00:03:44.457 [2024-07-10 13:27:23.523363] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:03:44.457 [2024-07-10 13:27:23.523693] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:44.716 EAL: TSC is not safe to use in SMP mode 00:03:44.716 EAL: TSC is not invariant 00:03:44.716 [2024-07-10 13:27:23.954636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:44.716 [2024-07-10 13:27:24.031287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:44.716 13:27:24 -- accel/accel.sh@12 -- # build_accel_config 00:03:44.716 13:27:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:44.716 13:27:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:44.716 13:27:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:44.716 13:27:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:44.716 13:27:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:44.716 13:27:24 -- accel/accel.sh@41 -- # local IFS=, 00:03:44.716 13:27:24 -- accel/accel.sh@42 -- # jq -r . 00:03:46.094 13:27:25 -- accel/accel.sh@18 -- # out=' 00:03:46.094 SPDK Configuration: 00:03:46.094 Core mask: 0x1 00:03:46.094 00:03:46.094 Accel Perf Configuration: 00:03:46.094 Workload Type: xor 00:03:46.094 Source buffers: 2 00:03:46.094 Transfer size: 4096 bytes 00:03:46.094 Vector count 1 00:03:46.094 Module: software 00:03:46.094 Queue depth: 32 00:03:46.094 Allocate depth: 32 00:03:46.094 # threads/core: 1 00:03:46.094 Run time: 1 seconds 00:03:46.094 Verify: Yes 00:03:46.094 00:03:46.094 Running for 1 seconds... 00:03:46.094 00:03:46.094 Core,Thread Transfers Bandwidth Failed Miscompares 00:03:46.094 ------------------------------------------------------------------------------------ 00:03:46.094 0,0 1894432/s 7400 MiB/s 0 0 00:03:46.094 ==================================================================================== 00:03:46.094 Total 1894432/s 7400 MiB/s 0 0' 00:03:46.094 13:27:25 -- accel/accel.sh@20 -- # IFS=: 00:03:46.094 13:27:25 -- accel/accel.sh@20 -- # read -r var val 00:03:46.094 13:27:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:03:46.094 13:27:25 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.quLLFf -t 1 -w xor -y 00:03:46.094 [2024-07-10 13:27:25.185448] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:46.094 [2024-07-10 13:27:25.185741] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:46.358 EAL: TSC is not safe to use in SMP mode 00:03:46.358 EAL: TSC is not invariant 00:03:46.358 [2024-07-10 13:27:25.628952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.624 [2024-07-10 13:27:25.717785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.624 13:27:25 -- accel/accel.sh@12 -- # build_accel_config 00:03:46.624 13:27:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:46.624 13:27:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:46.624 13:27:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:46.624 13:27:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:46.624 13:27:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:46.624 13:27:25 -- accel/accel.sh@41 -- # local IFS=, 00:03:46.624 13:27:25 -- accel/accel.sh@42 -- # jq -r . 
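The xor report above lists two source buffers of 4096 bytes each, so the operation reduces to XORing the sources element by element into a destination buffer, the same building block RAID-style parity uses. The sketch below is an illustrative model with those sizes, not SPDK's accel module code. The follow-on accel_xor case started further below raises the source count to 3 (-x 3), but the per-element XOR is the same idea.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative xor workload: XOR two source buffers into a destination. */
    static void do_xor(uint8_t *dst, const uint8_t *s0, const uint8_t *s1, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            dst[i] = s0[i] ^ s1[i];
    }

    int main(void)
    {
        static uint8_t s0[4096], s1[4096], dst[4096];   /* 2 sources, 4096 bytes each */

        memset(s0, 0xF0, sizeof(s0));
        memset(s1, 0x0F, sizeof(s1));
        do_xor(dst, s0, s1, sizeof(dst));
        printf("dst[0] = 0x%02x\n", dst[0]);            /* 0xF0 ^ 0x0F = 0xFF */
        return 0;
    }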
00:03:46.624 13:27:25 -- accel/accel.sh@21 -- # val= 00:03:46.624 13:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # IFS=: 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # read -r var val 00:03:46.624 13:27:25 -- accel/accel.sh@21 -- # val= 00:03:46.624 13:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # IFS=: 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # read -r var val 00:03:46.624 13:27:25 -- accel/accel.sh@21 -- # val=0x1 00:03:46.624 13:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # IFS=: 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # read -r var val 00:03:46.624 13:27:25 -- accel/accel.sh@21 -- # val= 00:03:46.624 13:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # IFS=: 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # read -r var val 00:03:46.624 13:27:25 -- accel/accel.sh@21 -- # val= 00:03:46.624 13:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # IFS=: 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # read -r var val 00:03:46.624 13:27:25 -- accel/accel.sh@21 -- # val=xor 00:03:46.624 13:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.624 13:27:25 -- accel/accel.sh@24 -- # accel_opc=xor 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # IFS=: 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # read -r var val 00:03:46.624 13:27:25 -- accel/accel.sh@21 -- # val=2 00:03:46.624 13:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # IFS=: 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # read -r var val 00:03:46.624 13:27:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:46.624 13:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # IFS=: 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # read -r var val 00:03:46.624 13:27:25 -- accel/accel.sh@21 -- # val= 00:03:46.624 13:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # IFS=: 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # read -r var val 00:03:46.624 13:27:25 -- accel/accel.sh@21 -- # val=software 00:03:46.624 13:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.624 13:27:25 -- accel/accel.sh@23 -- # accel_module=software 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # IFS=: 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # read -r var val 00:03:46.624 13:27:25 -- accel/accel.sh@21 -- # val=32 00:03:46.624 13:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # IFS=: 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # read -r var val 00:03:46.624 13:27:25 -- accel/accel.sh@21 -- # val=32 00:03:46.624 13:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # IFS=: 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # read -r var val 00:03:46.624 13:27:25 -- accel/accel.sh@21 -- # val=1 00:03:46.624 13:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # IFS=: 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # read -r var val 00:03:46.624 13:27:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:03:46.624 13:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # IFS=: 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # read -r var val 00:03:46.624 13:27:25 -- accel/accel.sh@21 -- # val=Yes 00:03:46.624 13:27:25 -- accel/accel.sh@22 -- # 
case "$var" in 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # IFS=: 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # read -r var val 00:03:46.624 13:27:25 -- accel/accel.sh@21 -- # val= 00:03:46.624 13:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # IFS=: 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # read -r var val 00:03:46.624 13:27:25 -- accel/accel.sh@21 -- # val= 00:03:46.624 13:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # IFS=: 00:03:46.624 13:27:25 -- accel/accel.sh@20 -- # read -r var val 00:03:47.561 13:27:26 -- accel/accel.sh@21 -- # val= 00:03:47.561 13:27:26 -- accel/accel.sh@22 -- # case "$var" in 00:03:47.561 13:27:26 -- accel/accel.sh@20 -- # IFS=: 00:03:47.561 13:27:26 -- accel/accel.sh@20 -- # read -r var val 00:03:47.561 13:27:26 -- accel/accel.sh@21 -- # val= 00:03:47.561 13:27:26 -- accel/accel.sh@22 -- # case "$var" in 00:03:47.561 13:27:26 -- accel/accel.sh@20 -- # IFS=: 00:03:47.561 13:27:26 -- accel/accel.sh@20 -- # read -r var val 00:03:47.561 13:27:26 -- accel/accel.sh@21 -- # val= 00:03:47.561 13:27:26 -- accel/accel.sh@22 -- # case "$var" in 00:03:47.561 13:27:26 -- accel/accel.sh@20 -- # IFS=: 00:03:47.561 13:27:26 -- accel/accel.sh@20 -- # read -r var val 00:03:47.561 13:27:26 -- accel/accel.sh@21 -- # val= 00:03:47.561 13:27:26 -- accel/accel.sh@22 -- # case "$var" in 00:03:47.561 13:27:26 -- accel/accel.sh@20 -- # IFS=: 00:03:47.561 13:27:26 -- accel/accel.sh@20 -- # read -r var val 00:03:47.561 13:27:26 -- accel/accel.sh@21 -- # val= 00:03:47.561 13:27:26 -- accel/accel.sh@22 -- # case "$var" in 00:03:47.561 13:27:26 -- accel/accel.sh@20 -- # IFS=: 00:03:47.561 13:27:26 -- accel/accel.sh@20 -- # read -r var val 00:03:47.561 13:27:26 -- accel/accel.sh@21 -- # val= 00:03:47.561 13:27:26 -- accel/accel.sh@22 -- # case "$var" in 00:03:47.561 13:27:26 -- accel/accel.sh@20 -- # IFS=: 00:03:47.561 13:27:26 -- accel/accel.sh@20 -- # read -r var val 00:03:47.561 13:27:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:03:47.561 13:27:26 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:03:47.561 13:27:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:03:47.561 00:03:47.561 real 0m3.353s 00:03:47.562 user 0m2.409s 00:03:47.562 sys 0m0.960s 00:03:47.562 13:27:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.562 13:27:26 -- common/autotest_common.sh@10 -- # set +x 00:03:47.562 ************************************ 00:03:47.562 END TEST accel_xor 00:03:47.562 ************************************ 00:03:47.562 13:27:26 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:03:47.562 13:27:26 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:03:47.562 13:27:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:47.562 13:27:26 -- common/autotest_common.sh@10 -- # set +x 00:03:47.562 ************************************ 00:03:47.562 START TEST accel_xor 00:03:47.562 ************************************ 00:03:47.562 13:27:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:03:47.562 13:27:26 -- accel/accel.sh@16 -- # local accel_opc 00:03:47.562 13:27:26 -- accel/accel.sh@17 -- # local accel_module 00:03:47.562 13:27:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:03:47.562 13:27:26 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.BMlWet -t 1 -w xor -y -x 3 00:03:47.821 [2024-07-10 13:27:26.924767] Starting SPDK v24.01.1-pre 
git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:47.821 [2024-07-10 13:27:26.925134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:48.080 EAL: TSC is not safe to use in SMP mode 00:03:48.080 EAL: TSC is not invariant 00:03:48.080 [2024-07-10 13:27:27.351862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.340 [2024-07-10 13:27:27.439454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.340 13:27:27 -- accel/accel.sh@12 -- # build_accel_config 00:03:48.340 13:27:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:48.340 13:27:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:48.340 13:27:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:48.340 13:27:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:48.340 13:27:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:48.340 13:27:27 -- accel/accel.sh@41 -- # local IFS=, 00:03:48.340 13:27:27 -- accel/accel.sh@42 -- # jq -r . 00:03:49.277 13:27:28 -- accel/accel.sh@18 -- # out=' 00:03:49.277 SPDK Configuration: 00:03:49.277 Core mask: 0x1 00:03:49.277 00:03:49.277 Accel Perf Configuration: 00:03:49.277 Workload Type: xor 00:03:49.277 Source buffers: 3 00:03:49.277 Transfer size: 4096 bytes 00:03:49.277 Vector count 1 00:03:49.277 Module: software 00:03:49.277 Queue depth: 32 00:03:49.277 Allocate depth: 32 00:03:49.277 # threads/core: 1 00:03:49.277 Run time: 1 seconds 00:03:49.277 Verify: Yes 00:03:49.277 00:03:49.277 Running for 1 seconds... 00:03:49.277 00:03:49.277 Core,Thread Transfers Bandwidth Failed Miscompares 00:03:49.277 ------------------------------------------------------------------------------------ 00:03:49.277 0,0 1725440/s 6740 MiB/s 0 0 00:03:49.277 ==================================================================================== 00:03:49.277 Total 1725440/s 6740 MiB/s 0 0' 00:03:49.277 13:27:28 -- accel/accel.sh@20 -- # IFS=: 00:03:49.277 13:27:28 -- accel/accel.sh@20 -- # read -r var val 00:03:49.277 13:27:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:03:49.277 13:27:28 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.TVmH4E -t 1 -w xor -y -x 3 00:03:49.277 [2024-07-10 13:27:28.594237] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:49.277 [2024-07-10 13:27:28.594602] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:49.847 EAL: TSC is not safe to use in SMP mode 00:03:49.847 EAL: TSC is not invariant 00:03:49.847 [2024-07-10 13:27:29.026767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.847 [2024-07-10 13:27:29.115522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.847 13:27:29 -- accel/accel.sh@12 -- # build_accel_config 00:03:49.847 13:27:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:49.847 13:27:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:49.847 13:27:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:49.847 13:27:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:49.847 13:27:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:49.847 13:27:29 -- accel/accel.sh@41 -- # local IFS=, 00:03:49.847 13:27:29 -- accel/accel.sh@42 -- # jq -r . 
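For reference, the Bandwidth column in these accel_perf summaries is simply the Transfers column scaled by the 4096-byte transfer size. A quick standalone cross-check of the xor figure above (the 1725440 value is copied from the table and 1048576 converts bytes to MiB; this is a sanity-check sketch, not part of the test harness):

awk 'BEGIN { xfers = 1725440; size = 4096; printf "%.0f MiB/s\n", xfers * size / 1048576 }'
# prints 6740 MiB/s, matching the Total row above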
00:03:49.847 13:27:29 -- accel/accel.sh@21 -- # val= 00:03:49.847 13:27:29 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # IFS=: 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # read -r var val 00:03:49.847 13:27:29 -- accel/accel.sh@21 -- # val= 00:03:49.847 13:27:29 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # IFS=: 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # read -r var val 00:03:49.847 13:27:29 -- accel/accel.sh@21 -- # val=0x1 00:03:49.847 13:27:29 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # IFS=: 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # read -r var val 00:03:49.847 13:27:29 -- accel/accel.sh@21 -- # val= 00:03:49.847 13:27:29 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # IFS=: 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # read -r var val 00:03:49.847 13:27:29 -- accel/accel.sh@21 -- # val= 00:03:49.847 13:27:29 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # IFS=: 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # read -r var val 00:03:49.847 13:27:29 -- accel/accel.sh@21 -- # val=xor 00:03:49.847 13:27:29 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.847 13:27:29 -- accel/accel.sh@24 -- # accel_opc=xor 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # IFS=: 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # read -r var val 00:03:49.847 13:27:29 -- accel/accel.sh@21 -- # val=3 00:03:49.847 13:27:29 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # IFS=: 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # read -r var val 00:03:49.847 13:27:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:49.847 13:27:29 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # IFS=: 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # read -r var val 00:03:49.847 13:27:29 -- accel/accel.sh@21 -- # val= 00:03:49.847 13:27:29 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # IFS=: 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # read -r var val 00:03:49.847 13:27:29 -- accel/accel.sh@21 -- # val=software 00:03:49.847 13:27:29 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.847 13:27:29 -- accel/accel.sh@23 -- # accel_module=software 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # IFS=: 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # read -r var val 00:03:49.847 13:27:29 -- accel/accel.sh@21 -- # val=32 00:03:49.847 13:27:29 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # IFS=: 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # read -r var val 00:03:49.847 13:27:29 -- accel/accel.sh@21 -- # val=32 00:03:49.847 13:27:29 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # IFS=: 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # read -r var val 00:03:49.847 13:27:29 -- accel/accel.sh@21 -- # val=1 00:03:49.847 13:27:29 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # IFS=: 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # read -r var val 00:03:49.847 13:27:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:03:49.847 13:27:29 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # IFS=: 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # read -r var val 00:03:49.847 13:27:29 -- accel/accel.sh@21 -- # val=Yes 00:03:49.847 13:27:29 -- accel/accel.sh@22 -- # 
case "$var" in 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # IFS=: 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # read -r var val 00:03:49.847 13:27:29 -- accel/accel.sh@21 -- # val= 00:03:49.847 13:27:29 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # IFS=: 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # read -r var val 00:03:49.847 13:27:29 -- accel/accel.sh@21 -- # val= 00:03:49.847 13:27:29 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # IFS=: 00:03:49.847 13:27:29 -- accel/accel.sh@20 -- # read -r var val 00:03:51.227 13:27:30 -- accel/accel.sh@21 -- # val= 00:03:51.227 13:27:30 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.227 13:27:30 -- accel/accel.sh@20 -- # IFS=: 00:03:51.227 13:27:30 -- accel/accel.sh@20 -- # read -r var val 00:03:51.227 13:27:30 -- accel/accel.sh@21 -- # val= 00:03:51.227 13:27:30 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.227 13:27:30 -- accel/accel.sh@20 -- # IFS=: 00:03:51.227 13:27:30 -- accel/accel.sh@20 -- # read -r var val 00:03:51.227 13:27:30 -- accel/accel.sh@21 -- # val= 00:03:51.227 13:27:30 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.227 13:27:30 -- accel/accel.sh@20 -- # IFS=: 00:03:51.227 13:27:30 -- accel/accel.sh@20 -- # read -r var val 00:03:51.227 13:27:30 -- accel/accel.sh@21 -- # val= 00:03:51.227 13:27:30 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.227 13:27:30 -- accel/accel.sh@20 -- # IFS=: 00:03:51.227 13:27:30 -- accel/accel.sh@20 -- # read -r var val 00:03:51.227 13:27:30 -- accel/accel.sh@21 -- # val= 00:03:51.227 13:27:30 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.227 13:27:30 -- accel/accel.sh@20 -- # IFS=: 00:03:51.227 13:27:30 -- accel/accel.sh@20 -- # read -r var val 00:03:51.227 13:27:30 -- accel/accel.sh@21 -- # val= 00:03:51.227 13:27:30 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.227 13:27:30 -- accel/accel.sh@20 -- # IFS=: 00:03:51.227 13:27:30 -- accel/accel.sh@20 -- # read -r var val 00:03:51.227 13:27:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:03:51.227 13:27:30 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:03:51.227 13:27:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:03:51.227 00:03:51.227 real 0m3.346s 00:03:51.227 user 0m2.409s 00:03:51.227 sys 0m0.953s 00:03:51.227 13:27:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.227 13:27:30 -- common/autotest_common.sh@10 -- # set +x 00:03:51.227 ************************************ 00:03:51.227 END TEST accel_xor 00:03:51.227 ************************************ 00:03:51.227 13:27:30 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:03:51.227 13:27:30 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:03:51.227 13:27:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:51.227 13:27:30 -- common/autotest_common.sh@10 -- # set +x 00:03:51.227 ************************************ 00:03:51.227 START TEST accel_dif_verify 00:03:51.227 ************************************ 00:03:51.227 13:27:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:03:51.227 13:27:30 -- accel/accel.sh@16 -- # local accel_opc 00:03:51.227 13:27:30 -- accel/accel.sh@17 -- # local accel_module 00:03:51.227 13:27:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:03:51.227 13:27:30 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.qz85Mx -t 1 -w dif_verify 00:03:51.227 [2024-07-10 13:27:30.318097] Starting SPDK 
v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:51.227 [2024-07-10 13:27:30.318477] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:51.487 EAL: TSC is not safe to use in SMP mode 00:03:51.487 EAL: TSC is not invariant 00:03:51.487 [2024-07-10 13:27:30.753857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.487 [2024-07-10 13:27:30.840286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.487 13:27:30 -- accel/accel.sh@12 -- # build_accel_config 00:03:51.487 13:27:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:51.487 13:27:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:51.487 13:27:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:51.487 13:27:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:51.487 13:27:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:51.487 13:27:30 -- accel/accel.sh@41 -- # local IFS=, 00:03:51.487 13:27:30 -- accel/accel.sh@42 -- # jq -r . 00:03:52.863 13:27:31 -- accel/accel.sh@18 -- # out=' 00:03:52.863 SPDK Configuration: 00:03:52.863 Core mask: 0x1 00:03:52.863 00:03:52.863 Accel Perf Configuration: 00:03:52.863 Workload Type: dif_verify 00:03:52.863 Vector size: 4096 bytes 00:03:52.863 Transfer size: 4096 bytes 00:03:52.863 Block size: 512 bytes 00:03:52.863 Metadata size: 8 bytes 00:03:52.863 Vector count 1 00:03:52.863 Module: software 00:03:52.863 Queue depth: 32 00:03:52.863 Allocate depth: 32 00:03:52.863 # threads/core: 1 00:03:52.863 Run time: 1 seconds 00:03:52.863 Verify: No 00:03:52.863 00:03:52.863 Running for 1 seconds... 00:03:52.863 00:03:52.863 Core,Thread Transfers Bandwidth Failed Miscompares 00:03:52.863 ------------------------------------------------------------------------------------ 00:03:52.863 0,0 1324704/s 5174 MiB/s 0 0 00:03:52.863 ==================================================================================== 00:03:52.863 Total 1324704/s 5174 MiB/s 0 0' 00:03:52.863 13:27:31 -- accel/accel.sh@20 -- # IFS=: 00:03:52.863 13:27:31 -- accel/accel.sh@20 -- # read -r var val 00:03:52.863 13:27:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:03:52.863 13:27:31 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.NNkG2I -t 1 -w dif_verify 00:03:52.863 [2024-07-10 13:27:31.988182] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:52.863 [2024-07-10 13:27:31.988306] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:53.122 EAL: TSC is not safe to use in SMP mode 00:03:53.122 EAL: TSC is not invariant 00:03:53.122 [2024-07-10 13:27:32.413476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.380 [2024-07-10 13:27:32.489913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.380 13:27:32 -- accel/accel.sh@12 -- # build_accel_config 00:03:53.380 13:27:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:53.380 13:27:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:53.380 13:27:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:53.380 13:27:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:53.380 13:27:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:53.380 13:27:32 -- accel/accel.sh@41 -- # local IFS=, 00:03:53.380 13:27:32 -- accel/accel.sh@42 -- # jq -r .
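The dif_verify configuration above (4096-byte transfers, 512-byte blocks, 8 bytes of metadata) means each transfer covers eight protected blocks, and each block carries one 8-byte T10 DIF tuple (guard, application and reference tags). A small sketch of that arithmetic; whether those 8 metadata bytes sit inline with the data or in a separate buffer is a detail of the test setup that this log does not show:

transfer=4096; block=512; md=8
echo "blocks per transfer: $((transfer / block))"            # 8
echo "DIF bytes per transfer: $(((transfer / block) * md))"  # 64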
00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val= 00:03:53.380 13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val= 00:03:53.380 13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val=0x1 00:03:53.380 13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val= 00:03:53.380 13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val= 00:03:53.380 13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val=dif_verify 00:03:53.380 13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:53.380 13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:53.380 13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val='512 bytes' 00:03:53.380 13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val='8 bytes' 00:03:53.380 13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val= 00:03:53.380 13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val=software 00:03:53.380 13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@23 -- # accel_module=software 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val=32 00:03:53.380 13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val=32 00:03:53.380 13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val=1 00:03:53.380 
13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:03:53.380 13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val=No 00:03:53.380 13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val= 00:03:53.380 13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:53.380 13:27:32 -- accel/accel.sh@21 -- # val= 00:03:53.380 13:27:32 -- accel/accel.sh@22 -- # case "$var" in 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # IFS=: 00:03:53.380 13:27:32 -- accel/accel.sh@20 -- # read -r var val 00:03:54.317 13:27:33 -- accel/accel.sh@21 -- # val= 00:03:54.317 13:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:03:54.317 13:27:33 -- accel/accel.sh@20 -- # IFS=: 00:03:54.317 13:27:33 -- accel/accel.sh@20 -- # read -r var val 00:03:54.317 13:27:33 -- accel/accel.sh@21 -- # val= 00:03:54.317 13:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:03:54.317 13:27:33 -- accel/accel.sh@20 -- # IFS=: 00:03:54.317 13:27:33 -- accel/accel.sh@20 -- # read -r var val 00:03:54.317 13:27:33 -- accel/accel.sh@21 -- # val= 00:03:54.317 13:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:03:54.317 13:27:33 -- accel/accel.sh@20 -- # IFS=: 00:03:54.317 13:27:33 -- accel/accel.sh@20 -- # read -r var val 00:03:54.317 13:27:33 -- accel/accel.sh@21 -- # val= 00:03:54.317 13:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:03:54.317 13:27:33 -- accel/accel.sh@20 -- # IFS=: 00:03:54.317 13:27:33 -- accel/accel.sh@20 -- # read -r var val 00:03:54.317 13:27:33 -- accel/accel.sh@21 -- # val= 00:03:54.317 13:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:03:54.317 13:27:33 -- accel/accel.sh@20 -- # IFS=: 00:03:54.317 13:27:33 -- accel/accel.sh@20 -- # read -r var val 00:03:54.317 13:27:33 -- accel/accel.sh@21 -- # val= 00:03:54.317 13:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:03:54.317 13:27:33 -- accel/accel.sh@20 -- # IFS=: 00:03:54.317 13:27:33 -- accel/accel.sh@20 -- # read -r var val 00:03:54.317 13:27:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:03:54.317 13:27:33 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:03:54.317 13:27:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:03:54.317 00:03:54.317 real 0m3.332s 00:03:54.317 user 0m2.393s 00:03:54.317 sys 0m0.957s 00:03:54.317 13:27:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.317 13:27:33 -- common/autotest_common.sh@10 -- # set +x 00:03:54.317 ************************************ 00:03:54.317 END TEST accel_dif_verify 00:03:54.317 ************************************ 00:03:54.575 13:27:33 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:03:54.575 13:27:33 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:03:54.575 13:27:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:54.575 13:27:33 -- common/autotest_common.sh@10 -- # set +x 00:03:54.575 ************************************ 00:03:54.575 START TEST accel_dif_generate 00:03:54.575 
************************************ 00:03:54.575 13:27:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:03:54.575 13:27:33 -- accel/accel.sh@16 -- # local accel_opc 00:03:54.575 13:27:33 -- accel/accel.sh@17 -- # local accel_module 00:03:54.575 13:27:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:03:54.575 13:27:33 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.2Kmu4e -t 1 -w dif_generate 00:03:54.575 [2024-07-10 13:27:33.697509] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:54.575 [2024-07-10 13:27:33.697893] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:54.834 EAL: TSC is not safe to use in SMP mode 00:03:54.834 EAL: TSC is not invariant 00:03:54.834 [2024-07-10 13:27:34.133188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.093 [2024-07-10 13:27:34.222226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.093 13:27:34 -- accel/accel.sh@12 -- # build_accel_config 00:03:55.093 13:27:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:55.093 13:27:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:55.093 13:27:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:55.093 13:27:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:55.093 13:27:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:55.093 13:27:34 -- accel/accel.sh@41 -- # local IFS=, 00:03:55.093 13:27:34 -- accel/accel.sh@42 -- # jq -r . 00:03:56.029 13:27:35 -- accel/accel.sh@18 -- # out=' 00:03:56.029 SPDK Configuration: 00:03:56.029 Core mask: 0x1 00:03:56.029 00:03:56.029 Accel Perf Configuration: 00:03:56.029 Workload Type: dif_generate 00:03:56.029 Vector size: 4096 bytes 00:03:56.029 Transfer size: 4096 bytes 00:03:56.029 Block size: 512 bytes 00:03:56.029 Metadata size: 8 bytes 00:03:56.029 Vector count 1 00:03:56.029 Module: software 00:03:56.029 Queue depth: 32 00:03:56.029 Allocate depth: 32 00:03:56.029 # threads/core: 1 00:03:56.029 Run time: 1 seconds 00:03:56.029 Verify: No 00:03:56.029 00:03:56.029 Running for 1 seconds... 00:03:56.029 00:03:56.029 Core,Thread Transfers Bandwidth Failed Miscompares 00:03:56.029 ------------------------------------------------------------------------------------ 00:03:56.029 0,0 1512704/s 5909 MiB/s 0 0 00:03:56.029 ==================================================================================== 00:03:56.029 Total 1512704/s 5909 MiB/s 0 0' 00:03:56.029 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.029 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.029 13:27:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:03:56.029 13:27:35 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ma6djs -t 1 -w dif_generate 00:03:56.029 [2024-07-10 13:27:35.366712] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:03:56.029 [2024-07-10 13:27:35.366863] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:56.596 EAL: TSC is not safe to use in SMP mode 00:03:56.596 EAL: TSC is not invariant 00:03:56.596 [2024-07-10 13:27:35.795875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.596 [2024-07-10 13:27:35.872856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.596 13:27:35 -- accel/accel.sh@12 -- # build_accel_config 00:03:56.596 13:27:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:56.596 13:27:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:56.596 13:27:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:56.596 13:27:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:56.596 13:27:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:56.596 13:27:35 -- accel/accel.sh@41 -- # local IFS=, 00:03:56.596 13:27:35 -- accel/accel.sh@42 -- # jq -r . 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val= 00:03:56.596 13:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val= 00:03:56.596 13:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val=0x1 00:03:56.596 13:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val= 00:03:56.596 13:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val= 00:03:56.596 13:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val=dif_generate 00:03:56.596 13:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.596 13:27:35 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:56.596 13:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:56.596 13:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val='512 bytes' 00:03:56.596 13:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val='8 bytes' 00:03:56.596 13:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val= 00:03:56.596 13:27:35 -- 
accel/accel.sh@22 -- # case "$var" in 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val=software 00:03:56.596 13:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.596 13:27:35 -- accel/accel.sh@23 -- # accel_module=software 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val=32 00:03:56.596 13:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val=32 00:03:56.596 13:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val=1 00:03:56.596 13:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:03:56.596 13:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val=No 00:03:56.596 13:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val= 00:03:56.596 13:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.596 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:56.596 13:27:35 -- accel/accel.sh@21 -- # val= 00:03:56.597 13:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.597 13:27:35 -- accel/accel.sh@20 -- # IFS=: 00:03:56.597 13:27:35 -- accel/accel.sh@20 -- # read -r var val 00:03:57.972 13:27:37 -- accel/accel.sh@21 -- # val= 00:03:57.972 13:27:37 -- accel/accel.sh@22 -- # case "$var" in 00:03:57.972 13:27:37 -- accel/accel.sh@20 -- # IFS=: 00:03:57.972 13:27:37 -- accel/accel.sh@20 -- # read -r var val 00:03:57.972 13:27:37 -- accel/accel.sh@21 -- # val= 00:03:57.972 13:27:37 -- accel/accel.sh@22 -- # case "$var" in 00:03:57.972 13:27:37 -- accel/accel.sh@20 -- # IFS=: 00:03:57.972 13:27:37 -- accel/accel.sh@20 -- # read -r var val 00:03:57.972 13:27:37 -- accel/accel.sh@21 -- # val= 00:03:57.972 13:27:37 -- accel/accel.sh@22 -- # case "$var" in 00:03:57.972 13:27:37 -- accel/accel.sh@20 -- # IFS=: 00:03:57.972 13:27:37 -- accel/accel.sh@20 -- # read -r var val 00:03:57.972 13:27:37 -- accel/accel.sh@21 -- # val= 00:03:57.973 13:27:37 -- accel/accel.sh@22 -- # case "$var" in 00:03:57.973 13:27:37 -- accel/accel.sh@20 -- # IFS=: 00:03:57.973 13:27:37 -- accel/accel.sh@20 -- # read -r var val 00:03:57.973 13:27:37 -- accel/accel.sh@21 -- # val= 00:03:57.973 13:27:37 -- accel/accel.sh@22 -- # case "$var" in 00:03:57.973 13:27:37 -- accel/accel.sh@20 -- # IFS=: 00:03:57.973 13:27:37 -- accel/accel.sh@20 -- # read -r var val 00:03:57.973 13:27:37 -- accel/accel.sh@21 -- # val= 00:03:57.973 13:27:37 -- accel/accel.sh@22 -- # case "$var" in 00:03:57.973 13:27:37 -- accel/accel.sh@20 -- # IFS=: 00:03:57.973 13:27:37 -- accel/accel.sh@20 -- # read -r var val 00:03:57.973 13:27:37 -- 
accel/accel.sh@28 -- # [[ -n software ]] 00:03:57.973 13:27:37 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:03:57.973 13:27:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:03:57.973 00:03:57.973 real 0m3.328s 00:03:57.973 user 0m2.390s 00:03:57.973 sys 0m0.955s 00:03:57.973 13:27:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.973 13:27:37 -- common/autotest_common.sh@10 -- # set +x 00:03:57.973 ************************************ 00:03:57.973 END TEST accel_dif_generate 00:03:57.973 ************************************ 00:03:57.973 13:27:37 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:03:57.973 13:27:37 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:03:57.973 13:27:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:57.973 13:27:37 -- common/autotest_common.sh@10 -- # set +x 00:03:57.973 ************************************ 00:03:57.973 START TEST accel_dif_generate_copy 00:03:57.973 ************************************ 00:03:57.973 13:27:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:03:57.973 13:27:37 -- accel/accel.sh@16 -- # local accel_opc 00:03:57.973 13:27:37 -- accel/accel.sh@17 -- # local accel_module 00:03:57.973 13:27:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:03:57.973 13:27:37 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.CEVHBk -t 1 -w dif_generate_copy 00:03:57.973 [2024-07-10 13:27:37.077840] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:57.973 [2024-07-10 13:27:37.078194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:58.232 EAL: TSC is not safe to use in SMP mode 00:03:58.232 EAL: TSC is not invariant 00:03:58.232 [2024-07-10 13:27:37.515651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.495 [2024-07-10 13:27:37.603701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.495 13:27:37 -- accel/accel.sh@12 -- # build_accel_config 00:03:58.495 13:27:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:58.495 13:27:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:58.495 13:27:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:58.495 13:27:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:58.495 13:27:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:58.495 13:27:37 -- accel/accel.sh@41 -- # local IFS=, 00:03:58.495 13:27:37 -- accel/accel.sh@42 -- # jq -r . 00:03:59.439 13:27:38 -- accel/accel.sh@18 -- # out=' 00:03:59.439 SPDK Configuration: 00:03:59.439 Core mask: 0x1 00:03:59.439 00:03:59.439 Accel Perf Configuration: 00:03:59.439 Workload Type: dif_generate_copy 00:03:59.439 Vector size: 4096 bytes 00:03:59.439 Transfer size: 4096 bytes 00:03:59.439 Vector count 1 00:03:59.439 Module: software 00:03:59.439 Queue depth: 32 00:03:59.439 Allocate depth: 32 00:03:59.439 # threads/core: 1 00:03:59.439 Run time: 1 seconds 00:03:59.439 Verify: No 00:03:59.439 00:03:59.439 Running for 1 seconds... 
00:03:59.439 00:03:59.439 Core,Thread Transfers Bandwidth Failed Miscompares 00:03:59.439 ------------------------------------------------------------------------------------ 00:03:59.439 0,0 1189664/s 4647 MiB/s 0 0 00:03:59.439 ==================================================================================== 00:03:59.439 Total 1189664/s 4647 MiB/s 0 0' 00:03:59.439 13:27:38 -- accel/accel.sh@20 -- # IFS=: 00:03:59.439 13:27:38 -- accel/accel.sh@20 -- # read -r var val 00:03:59.439 13:27:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:03:59.439 13:27:38 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.RJDPeD -t 1 -w dif_generate_copy 00:03:59.439 [2024-07-10 13:27:38.759544] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:59.439 [2024-07-10 13:27:38.759892] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:00.008 EAL: TSC is not safe to use in SMP mode 00:04:00.008 EAL: TSC is not invariant 00:04:00.008 [2024-07-10 13:27:39.199406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.008 [2024-07-10 13:27:39.287143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.008 13:27:39 -- accel/accel.sh@12 -- # build_accel_config 00:04:00.008 13:27:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:00.008 13:27:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:00.008 13:27:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:00.008 13:27:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:00.008 13:27:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:00.008 13:27:39 -- accel/accel.sh@41 -- # local IFS=, 00:04:00.008 13:27:39 -- accel/accel.sh@42 -- # jq -r .
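The long runs of IFS=:, read -r var val and case "$var" entries that follow come from accel.sh walking the settings of the run it just launched, one var/val pair at a time; markers such as val=dif_generate_copy and val=software are what later feed the [[ -n software ]] and [[ -n dif_generate_copy ]] checks at the end of each sub-test. A minimal sketch of that parsing pattern, with simplified variable handling, illustrative case labels and a stand-in input file rather than the literal accel.sh source:

while IFS=: read -r var val; do
    case "$var" in
        accel_opc) accel_opc=$val ;;        # e.g. dif_generate_copy
        accel_module) accel_module=$val ;;  # e.g. software
        *) : ;;                             # queue depth, run time, etc. not needed here
    esac
done < saved_settings.txt                   # hypothetical file name, for illustration only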
00:04:00.008 13:27:39 -- accel/accel.sh@21 -- # val= 00:04:00.008 13:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # IFS=: 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # read -r var val 00:04:00.008 13:27:39 -- accel/accel.sh@21 -- # val= 00:04:00.008 13:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # IFS=: 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # read -r var val 00:04:00.008 13:27:39 -- accel/accel.sh@21 -- # val=0x1 00:04:00.008 13:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # IFS=: 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # read -r var val 00:04:00.008 13:27:39 -- accel/accel.sh@21 -- # val= 00:04:00.008 13:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # IFS=: 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # read -r var val 00:04:00.008 13:27:39 -- accel/accel.sh@21 -- # val= 00:04:00.008 13:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # IFS=: 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # read -r var val 00:04:00.008 13:27:39 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:04:00.008 13:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:04:00.008 13:27:39 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # IFS=: 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # read -r var val 00:04:00.008 13:27:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:00.008 13:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # IFS=: 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # read -r var val 00:04:00.008 13:27:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:00.008 13:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # IFS=: 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # read -r var val 00:04:00.008 13:27:39 -- accel/accel.sh@21 -- # val= 00:04:00.008 13:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # IFS=: 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # read -r var val 00:04:00.008 13:27:39 -- accel/accel.sh@21 -- # val=software 00:04:00.008 13:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:04:00.008 13:27:39 -- accel/accel.sh@23 -- # accel_module=software 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # IFS=: 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # read -r var val 00:04:00.008 13:27:39 -- accel/accel.sh@21 -- # val=32 00:04:00.008 13:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # IFS=: 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # read -r var val 00:04:00.008 13:27:39 -- accel/accel.sh@21 -- # val=32 00:04:00.008 13:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # IFS=: 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # read -r var val 00:04:00.008 13:27:39 -- accel/accel.sh@21 -- # val=1 00:04:00.008 13:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # IFS=: 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # read -r var val 00:04:00.008 13:27:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:00.008 13:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # IFS=: 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # read -r var val 00:04:00.008 13:27:39 -- accel/accel.sh@21 -- # val=No 
00:04:00.008 13:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # IFS=: 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # read -r var val 00:04:00.008 13:27:39 -- accel/accel.sh@21 -- # val= 00:04:00.008 13:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # IFS=: 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # read -r var val 00:04:00.008 13:27:39 -- accel/accel.sh@21 -- # val= 00:04:00.008 13:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # IFS=: 00:04:00.008 13:27:39 -- accel/accel.sh@20 -- # read -r var val 00:04:01.390 13:27:40 -- accel/accel.sh@21 -- # val= 00:04:01.390 13:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.390 13:27:40 -- accel/accel.sh@20 -- # IFS=: 00:04:01.390 13:27:40 -- accel/accel.sh@20 -- # read -r var val 00:04:01.390 13:27:40 -- accel/accel.sh@21 -- # val= 00:04:01.390 13:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.390 13:27:40 -- accel/accel.sh@20 -- # IFS=: 00:04:01.390 13:27:40 -- accel/accel.sh@20 -- # read -r var val 00:04:01.390 13:27:40 -- accel/accel.sh@21 -- # val= 00:04:01.390 13:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.390 13:27:40 -- accel/accel.sh@20 -- # IFS=: 00:04:01.390 13:27:40 -- accel/accel.sh@20 -- # read -r var val 00:04:01.390 13:27:40 -- accel/accel.sh@21 -- # val= 00:04:01.390 13:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.390 13:27:40 -- accel/accel.sh@20 -- # IFS=: 00:04:01.390 13:27:40 -- accel/accel.sh@20 -- # read -r var val 00:04:01.390 13:27:40 -- accel/accel.sh@21 -- # val= 00:04:01.390 13:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.390 13:27:40 -- accel/accel.sh@20 -- # IFS=: 00:04:01.390 13:27:40 -- accel/accel.sh@20 -- # read -r var val 00:04:01.390 13:27:40 -- accel/accel.sh@21 -- # val= 00:04:01.390 13:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.390 13:27:40 -- accel/accel.sh@20 -- # IFS=: 00:04:01.390 13:27:40 -- accel/accel.sh@20 -- # read -r var val 00:04:01.390 13:27:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:01.390 13:27:40 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:04:01.390 13:27:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:01.390 00:04:01.390 real 0m3.362s 00:04:01.390 user 0m2.406s 00:04:01.390 sys 0m0.964s 00:04:01.390 13:27:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.390 13:27:40 -- common/autotest_common.sh@10 -- # set +x 00:04:01.390 ************************************ 00:04:01.390 END TEST accel_dif_generate_copy 00:04:01.390 ************************************ 00:04:01.390 13:27:40 -- accel/accel.sh@107 -- # [[ y == y ]] 00:04:01.390 13:27:40 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:01.390 13:27:40 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:04:01.390 13:27:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:01.390 13:27:40 -- common/autotest_common.sh@10 -- # set +x 00:04:01.390 ************************************ 00:04:01.390 START TEST accel_comp 00:04:01.390 ************************************ 00:04:01.390 13:27:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:01.390 13:27:40 -- accel/accel.sh@16 -- # local accel_opc 00:04:01.390 13:27:40 -- accel/accel.sh@17 -- # local accel_module 00:04:01.390 13:27:40 -- accel/accel.sh@18 -- # accel_perf -t 
1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 13:27:40 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ZuuTZt -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:01.390 [2024-07-10 13:27:40.476530] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:01.390 [2024-07-10 13:27:40.476880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:01.649 EAL: TSC is not safe to use in SMP mode 00:04:01.649 EAL: TSC is not invariant 00:04:01.649 [2024-07-10 13:27:40.924499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.909 [2024-07-10 13:27:41.012324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.909 13:27:41 -- accel/accel.sh@12 -- # build_accel_config 00:04:01.909 13:27:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:01.909 13:27:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:01.909 13:27:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:01.909 13:27:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:01.909 13:27:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:01.909 13:27:41 -- accel/accel.sh@41 -- # local IFS=, 00:04:01.909 13:27:41 -- accel/accel.sh@42 -- # jq -r . 00:04:02.848 13:27:42 -- accel/accel.sh@18 -- # out='Preparing input file... 00:04:02.848 00:04:02.848 SPDK Configuration: 00:04:02.848 Core mask: 0x1 00:04:02.848 00:04:02.848 Accel Perf Configuration: 00:04:02.848 Workload Type: compress 00:04:02.848 Transfer size: 4096 bytes 00:04:02.848 Vector count 1 00:04:02.848 Module: software 00:04:02.848 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:02.848 Queue depth: 32 00:04:02.848 Allocate depth: 32 00:04:02.848 # threads/core: 1 00:04:02.848 Run time: 1 seconds 00:04:02.848 Verify: No 00:04:02.848 00:04:02.848 Running for 1 seconds... 00:04:02.848 00:04:02.848 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:02.848 ------------------------------------------------------------------------------------ 00:04:02.848 0,0 58976/s 230 MiB/s 0 0 00:04:02.848 ==================================================================================== 00:04:02.848 Total 58976/s 230 MiB/s 0 0' 00:04:02.848 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:02.848 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:02.848 13:27:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:02.848 13:27:42 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.z99TJp -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:02.848 [2024-07-10 13:27:42.165211] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:04:02.848 [2024-07-10 13:27:42.165589] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:03.415 EAL: TSC is not safe to use in SMP mode 00:04:03.415 EAL: TSC is not invariant 00:04:03.415 [2024-07-10 13:27:42.610020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.415 [2024-07-10 13:27:42.700050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.415 13:27:42 -- accel/accel.sh@12 -- # build_accel_config 00:04:03.415 13:27:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:03.415 13:27:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:03.415 13:27:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:03.415 13:27:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:03.415 13:27:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:03.415 13:27:42 -- accel/accel.sh@41 -- # local IFS=, 00:04:03.415 13:27:42 -- accel/accel.sh@42 -- # jq -r . 00:04:03.415 13:27:42 -- accel/accel.sh@21 -- # val= 00:04:03.415 13:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.415 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:03.415 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:03.415 13:27:42 -- accel/accel.sh@21 -- # val= 00:04:03.415 13:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.415 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:03.415 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:03.415 13:27:42 -- accel/accel.sh@21 -- # val= 00:04:03.415 13:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.415 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:03.415 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:03.415 13:27:42 -- accel/accel.sh@21 -- # val=0x1 00:04:03.415 13:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.415 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:03.415 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:03.415 13:27:42 -- accel/accel.sh@21 -- # val= 00:04:03.415 13:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.415 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:03.415 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:03.415 13:27:42 -- accel/accel.sh@21 -- # val= 00:04:03.415 13:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.415 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:03.415 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:03.415 13:27:42 -- accel/accel.sh@21 -- # val=compress 00:04:03.415 13:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.415 13:27:42 -- accel/accel.sh@24 -- # accel_opc=compress 00:04:03.415 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:03.415 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:03.415 13:27:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:03.415 13:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.415 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:03.415 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:03.415 13:27:42 -- accel/accel.sh@21 -- # val= 00:04:03.415 13:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.415 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:03.415 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:03.415 13:27:42 -- accel/accel.sh@21 -- # val=software 00:04:03.415 13:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.416 13:27:42 -- accel/accel.sh@23 -- # accel_module=software 00:04:03.416 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:03.416 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:03.416 13:27:42 -- accel/accel.sh@21 -- # 
val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:03.416 13:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.416 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:03.416 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:03.416 13:27:42 -- accel/accel.sh@21 -- # val=32 00:04:03.416 13:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.416 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:03.416 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:03.416 13:27:42 -- accel/accel.sh@21 -- # val=32 00:04:03.416 13:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.416 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:03.416 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:03.416 13:27:42 -- accel/accel.sh@21 -- # val=1 00:04:03.416 13:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.416 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:03.416 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:03.416 13:27:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:03.416 13:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.416 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:03.416 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:03.416 13:27:42 -- accel/accel.sh@21 -- # val=No 00:04:03.416 13:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.416 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:03.416 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:03.416 13:27:42 -- accel/accel.sh@21 -- # val= 00:04:03.416 13:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.416 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:03.416 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:03.416 13:27:42 -- accel/accel.sh@21 -- # val= 00:04:03.416 13:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.416 13:27:42 -- accel/accel.sh@20 -- # IFS=: 00:04:03.416 13:27:42 -- accel/accel.sh@20 -- # read -r var val 00:04:04.789 13:27:43 -- accel/accel.sh@21 -- # val= 00:04:04.789 13:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:04:04.789 13:27:43 -- accel/accel.sh@20 -- # IFS=: 00:04:04.789 13:27:43 -- accel/accel.sh@20 -- # read -r var val 00:04:04.789 13:27:43 -- accel/accel.sh@21 -- # val= 00:04:04.789 13:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:04:04.789 13:27:43 -- accel/accel.sh@20 -- # IFS=: 00:04:04.789 13:27:43 -- accel/accel.sh@20 -- # read -r var val 00:04:04.789 13:27:43 -- accel/accel.sh@21 -- # val= 00:04:04.789 13:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:04:04.789 13:27:43 -- accel/accel.sh@20 -- # IFS=: 00:04:04.789 13:27:43 -- accel/accel.sh@20 -- # read -r var val 00:04:04.789 13:27:43 -- accel/accel.sh@21 -- # val= 00:04:04.789 13:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:04:04.789 13:27:43 -- accel/accel.sh@20 -- # IFS=: 00:04:04.789 13:27:43 -- accel/accel.sh@20 -- # read -r var val 00:04:04.789 13:27:43 -- accel/accel.sh@21 -- # val= 00:04:04.790 13:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:04:04.790 13:27:43 -- accel/accel.sh@20 -- # IFS=: 00:04:04.790 13:27:43 -- accel/accel.sh@20 -- # read -r var val 00:04:04.790 13:27:43 -- accel/accel.sh@21 -- # val= 00:04:04.790 13:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:04:04.790 13:27:43 -- accel/accel.sh@20 -- # IFS=: 00:04:04.790 13:27:43 -- accel/accel.sh@20 -- # read -r var val 00:04:04.790 13:27:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:04.790 13:27:43 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:04:04.790 13:27:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:04.790 00:04:04.790 real 0m3.381s 
00:04:04.790 user 0m2.399s 00:04:04.790 sys 0m0.992s 00:04:04.790 13:27:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.790 13:27:43 -- common/autotest_common.sh@10 -- # set +x 00:04:04.790 ************************************ 00:04:04.790 END TEST accel_comp 00:04:04.790 ************************************ 00:04:04.790 13:27:43 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:04.790 13:27:43 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:04:04.790 13:27:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:04.790 13:27:43 -- common/autotest_common.sh@10 -- # set +x 00:04:04.790 ************************************ 00:04:04.790 START TEST accel_decomp 00:04:04.790 ************************************ 00:04:04.790 13:27:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:04.790 13:27:43 -- accel/accel.sh@16 -- # local accel_opc 00:04:04.790 13:27:43 -- accel/accel.sh@17 -- # local accel_module 00:04:04.790 13:27:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:04.790 13:27:43 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.fNH580 -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:04.790 [2024-07-10 13:27:43.903410] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:04.790 [2024-07-10 13:27:43.903731] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:05.048 EAL: TSC is not safe to use in SMP mode 00:04:05.048 EAL: TSC is not invariant 00:04:05.048 [2024-07-10 13:27:44.338723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.307 [2024-07-10 13:27:44.430509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.307 13:27:44 -- accel/accel.sh@12 -- # build_accel_config 00:04:05.307 13:27:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:05.307 13:27:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:05.307 13:27:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:05.307 13:27:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:05.307 13:27:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:05.307 13:27:44 -- accel/accel.sh@41 -- # local IFS=, 00:04:05.307 13:27:44 -- accel/accel.sh@42 -- # jq -r . 00:04:06.245 13:27:45 -- accel/accel.sh@18 -- # out='Preparing input file... 00:04:06.245 00:04:06.245 SPDK Configuration: 00:04:06.245 Core mask: 0x1 00:04:06.245 00:04:06.245 Accel Perf Configuration: 00:04:06.245 Workload Type: decompress 00:04:06.245 Transfer size: 4096 bytes 00:04:06.245 Vector count 1 00:04:06.245 Module: software 00:04:06.245 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:06.245 Queue depth: 32 00:04:06.245 Allocate depth: 32 00:04:06.245 # threads/core: 1 00:04:06.245 Run time: 1 seconds 00:04:06.245 Verify: Yes 00:04:06.245 00:04:06.245 Running for 1 seconds... 
00:04:06.245 00:04:06.245 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:06.246 ------------------------------------------------------------------------------------ 00:04:06.246 0,0 82336/s 151 MiB/s 0 0 00:04:06.246 ==================================================================================== 00:04:06.246 Total 82336/s 321 MiB/s 0 0' 00:04:06.246 13:27:45 -- accel/accel.sh@20 -- # IFS=: 00:04:06.246 13:27:45 -- accel/accel.sh@20 -- # read -r var val 00:04:06.246 13:27:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:06.246 13:27:45 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.SO9MFA -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:06.246 [2024-07-10 13:27:45.583357] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:06.246 [2024-07-10 13:27:45.583709] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:06.812 EAL: TSC is not safe to use in SMP mode 00:04:06.812 EAL: TSC is not invariant 00:04:06.812 [2024-07-10 13:27:46.020076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.812 [2024-07-10 13:27:46.109688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.812 13:27:46 -- accel/accel.sh@12 -- # build_accel_config 00:04:06.812 13:27:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:06.812 13:27:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:06.812 13:27:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:06.812 13:27:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:06.812 13:27:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:06.812 13:27:46 -- accel/accel.sh@41 -- # local IFS=, 00:04:06.812 13:27:46 -- accel/accel.sh@42 -- # jq -r . 
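The accel_decomp results above come from the accel_perf example binary, which the accel.sh wrapper launches twice per test case; the xtrace block that follows replays the option parsing for the second pass. A minimal sketch of the invocation recorded in this log — the -c JSON config is a per-run temp file (/tmp//sh-np.*), shown here as a hypothetical placeholder:

```bash
# Sketch of the accel_perf call recorded above (decompress workload, software module).
# /tmp/sh-np.example.json stands in for the per-run config the wrapper generates.
/usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -c /tmp/sh-np.example.json \
    -t 1 -w decompress -y \
    -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib
```

The flags map directly onto the configuration dump above (-t 1 -> "Run time: 1 seconds", -w -> "Workload Type: decompress", -l -> "File Name", -y -> "Verify: Yes"), and the Total row is self-consistent: 82336 transfers/s x 4096 bytes ≈ 321 MiB/s.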
00:04:06.812 13:27:46 -- accel/accel.sh@21 -- # val= 00:04:06.812 13:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # IFS=: 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # read -r var val 00:04:06.812 13:27:46 -- accel/accel.sh@21 -- # val= 00:04:06.812 13:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # IFS=: 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # read -r var val 00:04:06.812 13:27:46 -- accel/accel.sh@21 -- # val= 00:04:06.812 13:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # IFS=: 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # read -r var val 00:04:06.812 13:27:46 -- accel/accel.sh@21 -- # val=0x1 00:04:06.812 13:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # IFS=: 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # read -r var val 00:04:06.812 13:27:46 -- accel/accel.sh@21 -- # val= 00:04:06.812 13:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # IFS=: 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # read -r var val 00:04:06.812 13:27:46 -- accel/accel.sh@21 -- # val= 00:04:06.812 13:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # IFS=: 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # read -r var val 00:04:06.812 13:27:46 -- accel/accel.sh@21 -- # val=decompress 00:04:06.812 13:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.812 13:27:46 -- accel/accel.sh@24 -- # accel_opc=decompress 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # IFS=: 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # read -r var val 00:04:06.812 13:27:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:06.812 13:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # IFS=: 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # read -r var val 00:04:06.812 13:27:46 -- accel/accel.sh@21 -- # val= 00:04:06.812 13:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # IFS=: 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # read -r var val 00:04:06.812 13:27:46 -- accel/accel.sh@21 -- # val=software 00:04:06.812 13:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.812 13:27:46 -- accel/accel.sh@23 -- # accel_module=software 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # IFS=: 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # read -r var val 00:04:06.812 13:27:46 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:06.812 13:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # IFS=: 00:04:06.812 13:27:46 -- accel/accel.sh@20 -- # read -r var val 00:04:06.812 13:27:46 -- accel/accel.sh@21 -- # val=32 00:04:06.813 13:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.813 13:27:46 -- accel/accel.sh@20 -- # IFS=: 00:04:06.813 13:27:46 -- accel/accel.sh@20 -- # read -r var val 00:04:06.813 13:27:46 -- accel/accel.sh@21 -- # val=32 00:04:06.813 13:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.813 13:27:46 -- accel/accel.sh@20 -- # IFS=: 00:04:06.813 13:27:46 -- accel/accel.sh@20 -- # read -r var val 00:04:06.813 13:27:46 -- accel/accel.sh@21 -- # val=1 00:04:06.813 13:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.813 13:27:46 -- accel/accel.sh@20 -- # IFS=: 00:04:06.813 13:27:46 -- accel/accel.sh@20 -- # read -r var val 00:04:06.813 13:27:46 -- accel/accel.sh@21 -- # val='1 
seconds' 00:04:06.813 13:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.813 13:27:46 -- accel/accel.sh@20 -- # IFS=: 00:04:06.813 13:27:46 -- accel/accel.sh@20 -- # read -r var val 00:04:06.813 13:27:46 -- accel/accel.sh@21 -- # val=Yes 00:04:06.813 13:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.813 13:27:46 -- accel/accel.sh@20 -- # IFS=: 00:04:06.813 13:27:46 -- accel/accel.sh@20 -- # read -r var val 00:04:06.813 13:27:46 -- accel/accel.sh@21 -- # val= 00:04:06.813 13:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.813 13:27:46 -- accel/accel.sh@20 -- # IFS=: 00:04:06.813 13:27:46 -- accel/accel.sh@20 -- # read -r var val 00:04:06.813 13:27:46 -- accel/accel.sh@21 -- # val= 00:04:06.813 13:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.813 13:27:46 -- accel/accel.sh@20 -- # IFS=: 00:04:06.813 13:27:46 -- accel/accel.sh@20 -- # read -r var val 00:04:08.186 13:27:47 -- accel/accel.sh@21 -- # val= 00:04:08.186 13:27:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.186 13:27:47 -- accel/accel.sh@20 -- # IFS=: 00:04:08.186 13:27:47 -- accel/accel.sh@20 -- # read -r var val 00:04:08.186 13:27:47 -- accel/accel.sh@21 -- # val= 00:04:08.186 13:27:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.186 13:27:47 -- accel/accel.sh@20 -- # IFS=: 00:04:08.186 13:27:47 -- accel/accel.sh@20 -- # read -r var val 00:04:08.186 13:27:47 -- accel/accel.sh@21 -- # val= 00:04:08.186 13:27:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.186 13:27:47 -- accel/accel.sh@20 -- # IFS=: 00:04:08.186 13:27:47 -- accel/accel.sh@20 -- # read -r var val 00:04:08.186 13:27:47 -- accel/accel.sh@21 -- # val= 00:04:08.186 13:27:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.186 13:27:47 -- accel/accel.sh@20 -- # IFS=: 00:04:08.186 13:27:47 -- accel/accel.sh@20 -- # read -r var val 00:04:08.186 13:27:47 -- accel/accel.sh@21 -- # val= 00:04:08.186 13:27:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.186 13:27:47 -- accel/accel.sh@20 -- # IFS=: 00:04:08.186 13:27:47 -- accel/accel.sh@20 -- # read -r var val 00:04:08.186 13:27:47 -- accel/accel.sh@21 -- # val= 00:04:08.186 13:27:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.186 13:27:47 -- accel/accel.sh@20 -- # IFS=: 00:04:08.186 13:27:47 -- accel/accel.sh@20 -- # read -r var val 00:04:08.186 13:27:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:08.186 13:27:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:04:08.186 13:27:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:08.186 00:04:08.186 real 0m3.361s 00:04:08.186 user 0m2.414s 00:04:08.186 sys 0m0.963s 00:04:08.186 13:27:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.186 13:27:47 -- common/autotest_common.sh@10 -- # set +x 00:04:08.186 ************************************ 00:04:08.186 END TEST accel_decomp 00:04:08.186 ************************************ 00:04:08.186 13:27:47 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:04:08.186 13:27:47 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:04:08.186 13:27:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:08.186 13:27:47 -- common/autotest_common.sh@10 -- # set +x 00:04:08.186 ************************************ 00:04:08.186 START TEST accel_decmop_full 00:04:08.186 ************************************ 00:04:08.186 13:27:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 
-y -o 0 00:04:08.186 13:27:47 -- accel/accel.sh@16 -- # local accel_opc 00:04:08.186 13:27:47 -- accel/accel.sh@17 -- # local accel_module 00:04:08.186 13:27:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:04:08.186 13:27:47 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.hibqUQ -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:04:08.186 [2024-07-10 13:27:47.312299] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:08.186 [2024-07-10 13:27:47.312660] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:08.443 EAL: TSC is not safe to use in SMP mode 00:04:08.443 EAL: TSC is not invariant 00:04:08.443 [2024-07-10 13:27:47.744395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.701 [2024-07-10 13:27:47.834062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.701 13:27:47 -- accel/accel.sh@12 -- # build_accel_config 00:04:08.701 13:27:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:08.701 13:27:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:08.701 13:27:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:08.701 13:27:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:08.701 13:27:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:08.701 13:27:47 -- accel/accel.sh@41 -- # local IFS=, 00:04:08.701 13:27:47 -- accel/accel.sh@42 -- # jq -r . 00:04:09.635 13:27:48 -- accel/accel.sh@18 -- # out='Preparing input file... 00:04:09.635 00:04:09.635 SPDK Configuration: 00:04:09.635 Core mask: 0x1 00:04:09.635 00:04:09.635 Accel Perf Configuration: 00:04:09.635 Workload Type: decompress 00:04:09.635 Transfer size: 111250 bytes 00:04:09.635 Vector count 1 00:04:09.635 Module: software 00:04:09.635 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:09.635 Queue depth: 32 00:04:09.635 Allocate depth: 32 00:04:09.635 # threads/core: 1 00:04:09.635 Run time: 1 seconds 00:04:09.635 Verify: Yes 00:04:09.635 00:04:09.635 Running for 1 seconds... 00:04:09.635 00:04:09.635 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:09.635 ------------------------------------------------------------------------------------ 00:04:09.635 0,0 4992/s 206 MiB/s 0 0 00:04:09.635 ==================================================================================== 00:04:09.635 Total 4992/s 529 MiB/s 0 0' 00:04:09.893 13:27:48 -- accel/accel.sh@20 -- # IFS=: 00:04:09.893 13:27:48 -- accel/accel.sh@20 -- # read -r var val 00:04:09.893 13:27:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:04:09.894 13:27:48 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.R0TVdC -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:04:09.894 [2024-07-10 13:27:49.003100] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:09.894 [2024-07-10 13:27:49.003460] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:10.151 EAL: TSC is not safe to use in SMP mode 00:04:10.151 EAL: TSC is not invariant 00:04:10.151 [2024-07-10 13:27:49.439584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.410 [2024-07-10 13:27:49.529816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.410 13:27:49 -- accel/accel.sh@12 -- # build_accel_config 00:04:10.410 13:27:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:10.410 13:27:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:10.411 13:27:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:10.411 13:27:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:10.411 13:27:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:10.411 13:27:49 -- accel/accel.sh@41 -- # local IFS=, 00:04:10.411 13:27:49 -- accel/accel.sh@42 -- # jq -r . 00:04:10.411 13:27:49 -- accel/accel.sh@21 -- # val= 00:04:10.411 13:27:49 -- accel/accel.sh@22 -- # case "$var" in 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # IFS=: 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # read -r var val 00:04:10.411 13:27:49 -- accel/accel.sh@21 -- # val= 00:04:10.411 13:27:49 -- accel/accel.sh@22 -- # case "$var" in 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # IFS=: 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # read -r var val 00:04:10.411 13:27:49 -- accel/accel.sh@21 -- # val= 00:04:10.411 13:27:49 -- accel/accel.sh@22 -- # case "$var" in 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # IFS=: 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # read -r var val 00:04:10.411 13:27:49 -- accel/accel.sh@21 -- # val=0x1 00:04:10.411 13:27:49 -- accel/accel.sh@22 -- # case "$var" in 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # IFS=: 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # read -r var val 00:04:10.411 13:27:49 -- accel/accel.sh@21 -- # val= 00:04:10.411 13:27:49 -- accel/accel.sh@22 -- # case "$var" in 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # IFS=: 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # read -r var val 00:04:10.411 13:27:49 -- accel/accel.sh@21 -- # val= 00:04:10.411 13:27:49 -- accel/accel.sh@22 -- # case "$var" in 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # IFS=: 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # read -r var val 00:04:10.411 13:27:49 -- accel/accel.sh@21 -- # val=decompress 00:04:10.411 13:27:49 -- accel/accel.sh@22 -- # case "$var" in 00:04:10.411 13:27:49 -- accel/accel.sh@24 -- # accel_opc=decompress 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # IFS=: 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # read -r var val 00:04:10.411 13:27:49 -- accel/accel.sh@21 -- # val='111250 bytes' 00:04:10.411 13:27:49 -- accel/accel.sh@22 -- # case "$var" in 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # IFS=: 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # read -r var val 00:04:10.411 13:27:49 -- accel/accel.sh@21 -- # val= 00:04:10.411 13:27:49 -- accel/accel.sh@22 -- # case "$var" in 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # IFS=: 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # read -r var val 00:04:10.411 13:27:49 -- accel/accel.sh@21 -- # val=software 00:04:10.411 13:27:49 -- accel/accel.sh@22 -- # case "$var" in 00:04:10.411 13:27:49 -- accel/accel.sh@23 -- # accel_module=software 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # IFS=: 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # read -r var val 00:04:10.411 13:27:49 -- accel/accel.sh@21 -- # 
val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:10.411 13:27:49 -- accel/accel.sh@22 -- # case "$var" in 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # IFS=: 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # read -r var val 00:04:10.411 13:27:49 -- accel/accel.sh@21 -- # val=32 00:04:10.411 13:27:49 -- accel/accel.sh@22 -- # case "$var" in 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # IFS=: 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # read -r var val 00:04:10.411 13:27:49 -- accel/accel.sh@21 -- # val=32 00:04:10.411 13:27:49 -- accel/accel.sh@22 -- # case "$var" in 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # IFS=: 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # read -r var val 00:04:10.411 13:27:49 -- accel/accel.sh@21 -- # val=1 00:04:10.411 13:27:49 -- accel/accel.sh@22 -- # case "$var" in 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # IFS=: 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # read -r var val 00:04:10.411 13:27:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:10.411 13:27:49 -- accel/accel.sh@22 -- # case "$var" in 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # IFS=: 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # read -r var val 00:04:10.411 13:27:49 -- accel/accel.sh@21 -- # val=Yes 00:04:10.411 13:27:49 -- accel/accel.sh@22 -- # case "$var" in 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # IFS=: 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # read -r var val 00:04:10.411 13:27:49 -- accel/accel.sh@21 -- # val= 00:04:10.411 13:27:49 -- accel/accel.sh@22 -- # case "$var" in 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # IFS=: 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # read -r var val 00:04:10.411 13:27:49 -- accel/accel.sh@21 -- # val= 00:04:10.411 13:27:49 -- accel/accel.sh@22 -- # case "$var" in 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # IFS=: 00:04:10.411 13:27:49 -- accel/accel.sh@20 -- # read -r var val 00:04:11.349 13:27:50 -- accel/accel.sh@21 -- # val= 00:04:11.349 13:27:50 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.349 13:27:50 -- accel/accel.sh@20 -- # IFS=: 00:04:11.349 13:27:50 -- accel/accel.sh@20 -- # read -r var val 00:04:11.349 13:27:50 -- accel/accel.sh@21 -- # val= 00:04:11.349 13:27:50 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.349 13:27:50 -- accel/accel.sh@20 -- # IFS=: 00:04:11.349 13:27:50 -- accel/accel.sh@20 -- # read -r var val 00:04:11.349 13:27:50 -- accel/accel.sh@21 -- # val= 00:04:11.349 13:27:50 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.349 13:27:50 -- accel/accel.sh@20 -- # IFS=: 00:04:11.349 13:27:50 -- accel/accel.sh@20 -- # read -r var val 00:04:11.349 13:27:50 -- accel/accel.sh@21 -- # val= 00:04:11.349 13:27:50 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.349 13:27:50 -- accel/accel.sh@20 -- # IFS=: 00:04:11.349 13:27:50 -- accel/accel.sh@20 -- # read -r var val 00:04:11.349 13:27:50 -- accel/accel.sh@21 -- # val= 00:04:11.349 13:27:50 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.349 13:27:50 -- accel/accel.sh@20 -- # IFS=: 00:04:11.349 13:27:50 -- accel/accel.sh@20 -- # read -r var val 00:04:11.349 13:27:50 -- accel/accel.sh@21 -- # val= 00:04:11.349 13:27:50 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.349 13:27:50 -- accel/accel.sh@20 -- # IFS=: 00:04:11.349 13:27:50 -- accel/accel.sh@20 -- # read -r var val 00:04:11.349 13:27:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:11.349 13:27:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:04:11.349 13:27:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:11.349 00:04:11.349 real 0m3.386s 
00:04:11.349 user 0m2.443s 00:04:11.349 sys 0m0.958s 00:04:11.349 13:27:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.349 13:27:50 -- common/autotest_common.sh@10 -- # set +x 00:04:11.349 ************************************ 00:04:11.349 END TEST accel_decmop_full 00:04:11.349 ************************************ 00:04:11.608 13:27:50 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:11.608 13:27:50 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:04:11.608 13:27:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.608 13:27:50 -- common/autotest_common.sh@10 -- # set +x 00:04:11.608 ************************************ 00:04:11.608 START TEST accel_decomp_mcore 00:04:11.608 ************************************ 00:04:11.608 13:27:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:11.608 13:27:50 -- accel/accel.sh@16 -- # local accel_opc 00:04:11.608 13:27:50 -- accel/accel.sh@17 -- # local accel_module 00:04:11.608 13:27:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:11.608 13:27:50 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.iwJ5ZF -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:11.608 [2024-07-10 13:27:50.748476] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:11.608 [2024-07-10 13:27:50.748837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:11.866 EAL: TSC is not safe to use in SMP mode 00:04:11.866 EAL: TSC is not invariant 00:04:11.866 [2024-07-10 13:27:51.201655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:12.125 [2024-07-10 13:27:51.298182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:12.125 [2024-07-10 13:27:51.298505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.125 [2024-07-10 13:27:51.298354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:12.125 [2024-07-10 13:27:51.298507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:12.125 13:27:51 -- accel/accel.sh@12 -- # build_accel_config 00:04:12.125 13:27:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:12.125 13:27:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:12.125 13:27:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:12.125 13:27:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:12.125 13:27:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:12.125 13:27:51 -- accel/accel.sh@41 -- # local IFS=, 00:04:12.125 13:27:51 -- accel/accel.sh@42 -- # jq -r . 00:04:13.506 13:27:52 -- accel/accel.sh@18 -- # out='Preparing input file... 00:04:13.506 00:04:13.506 SPDK Configuration: 00:04:13.506 Core mask: 0xf 00:04:13.506 00:04:13.506 Accel Perf Configuration: 00:04:13.507 Workload Type: decompress 00:04:13.507 Transfer size: 4096 bytes 00:04:13.507 Vector count 1 00:04:13.507 Module: software 00:04:13.507 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:13.507 Queue depth: 32 00:04:13.507 Allocate depth: 32 00:04:13.507 # threads/core: 1 00:04:13.507 Run time: 1 seconds 00:04:13.507 Verify: Yes 00:04:13.507 00:04:13.507 Running for 1 seconds... 
00:04:13.507 00:04:13.507 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:13.507 ------------------------------------------------------------------------------------ 00:04:13.507 0,0 73184/s 134 MiB/s 0 0 00:04:13.507 3,0 75008/s 138 MiB/s 0 0 00:04:13.507 2,0 75072/s 138 MiB/s 0 0 00:04:13.507 1,0 74912/s 138 MiB/s 0 0 00:04:13.507 ==================================================================================== 00:04:13.507 Total 298176/s 1164 MiB/s 0 0' 00:04:13.507 13:27:52 -- accel/accel.sh@20 -- # IFS=: 00:04:13.507 13:27:52 -- accel/accel.sh@20 -- # read -r var val 00:04:13.507 13:27:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:13.507 13:27:52 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.K4cs1r -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:13.507 [2024-07-10 13:27:52.458989] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:13.507 [2024-07-10 13:27:52.459360] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:13.767 EAL: TSC is not safe to use in SMP mode 00:04:13.767 EAL: TSC is not invariant 00:04:13.767 [2024-07-10 13:27:52.902733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:13.767 [2024-07-10 13:27:52.993860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.767 [2024-07-10 13:27:52.994111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.767 [2024-07-10 13:27:52.993965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:13.767 [2024-07-10 13:27:52.994112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:13.767 13:27:52 -- accel/accel.sh@12 -- # build_accel_config 00:04:13.767 13:27:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:13.767 13:27:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:13.767 13:27:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:13.767 13:27:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:13.767 13:27:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:13.767 13:27:52 -- accel/accel.sh@41 -- # local IFS=, 00:04:13.767 13:27:52 -- accel/accel.sh@42 -- # jq -r . 
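The table above is the multi-core variant: the harness hands accel_test the extra -m 0xf core mask, which shows up as "Core mask: 0xf" in the configuration dump and as four "Reactor started" notices (cores 0-3), each core decompressing roughly 73-75k transfers/s for a combined 298176/s ≈ 1164 MiB/s. The harness call as recorded in this log (run_test and accel_test are the wrapper functions whose xtrace appears throughout):

```bash
# Multi-core decompress case as recorded above: -m 0xf runs one reactor per
# core on cores 0-3; all other flags match the single-core runs.
run_test accel_decomp_mcore accel_test -t 1 -w decompress \
    -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
```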
00:04:13.767 13:27:53 -- accel/accel.sh@21 -- # val= 00:04:13.767 13:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # IFS=: 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # read -r var val 00:04:13.767 13:27:53 -- accel/accel.sh@21 -- # val= 00:04:13.767 13:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # IFS=: 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # read -r var val 00:04:13.767 13:27:53 -- accel/accel.sh@21 -- # val= 00:04:13.767 13:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # IFS=: 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # read -r var val 00:04:13.767 13:27:53 -- accel/accel.sh@21 -- # val=0xf 00:04:13.767 13:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # IFS=: 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # read -r var val 00:04:13.767 13:27:53 -- accel/accel.sh@21 -- # val= 00:04:13.767 13:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # IFS=: 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # read -r var val 00:04:13.767 13:27:53 -- accel/accel.sh@21 -- # val= 00:04:13.767 13:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # IFS=: 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # read -r var val 00:04:13.767 13:27:53 -- accel/accel.sh@21 -- # val=decompress 00:04:13.767 13:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.767 13:27:53 -- accel/accel.sh@24 -- # accel_opc=decompress 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # IFS=: 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # read -r var val 00:04:13.767 13:27:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:13.767 13:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # IFS=: 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # read -r var val 00:04:13.767 13:27:53 -- accel/accel.sh@21 -- # val= 00:04:13.767 13:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # IFS=: 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # read -r var val 00:04:13.767 13:27:53 -- accel/accel.sh@21 -- # val=software 00:04:13.767 13:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.767 13:27:53 -- accel/accel.sh@23 -- # accel_module=software 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # IFS=: 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # read -r var val 00:04:13.767 13:27:53 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:13.767 13:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # IFS=: 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # read -r var val 00:04:13.767 13:27:53 -- accel/accel.sh@21 -- # val=32 00:04:13.767 13:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # IFS=: 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # read -r var val 00:04:13.767 13:27:53 -- accel/accel.sh@21 -- # val=32 00:04:13.767 13:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # IFS=: 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # read -r var val 00:04:13.767 13:27:53 -- accel/accel.sh@21 -- # val=1 00:04:13.767 13:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # IFS=: 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # read -r var val 00:04:13.767 13:27:53 -- accel/accel.sh@21 -- # val='1 
seconds' 00:04:13.767 13:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # IFS=: 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # read -r var val 00:04:13.767 13:27:53 -- accel/accel.sh@21 -- # val=Yes 00:04:13.767 13:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # IFS=: 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # read -r var val 00:04:13.767 13:27:53 -- accel/accel.sh@21 -- # val= 00:04:13.767 13:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # IFS=: 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # read -r var val 00:04:13.767 13:27:53 -- accel/accel.sh@21 -- # val= 00:04:13.767 13:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # IFS=: 00:04:13.767 13:27:53 -- accel/accel.sh@20 -- # read -r var val 00:04:15.148 13:27:54 -- accel/accel.sh@21 -- # val= 00:04:15.148 13:27:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.148 13:27:54 -- accel/accel.sh@20 -- # IFS=: 00:04:15.148 13:27:54 -- accel/accel.sh@20 -- # read -r var val 00:04:15.148 13:27:54 -- accel/accel.sh@21 -- # val= 00:04:15.148 13:27:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.148 13:27:54 -- accel/accel.sh@20 -- # IFS=: 00:04:15.148 13:27:54 -- accel/accel.sh@20 -- # read -r var val 00:04:15.148 13:27:54 -- accel/accel.sh@21 -- # val= 00:04:15.148 13:27:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.148 13:27:54 -- accel/accel.sh@20 -- # IFS=: 00:04:15.148 13:27:54 -- accel/accel.sh@20 -- # read -r var val 00:04:15.148 13:27:54 -- accel/accel.sh@21 -- # val= 00:04:15.148 13:27:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.148 13:27:54 -- accel/accel.sh@20 -- # IFS=: 00:04:15.148 13:27:54 -- accel/accel.sh@20 -- # read -r var val 00:04:15.148 13:27:54 -- accel/accel.sh@21 -- # val= 00:04:15.148 13:27:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.148 13:27:54 -- accel/accel.sh@20 -- # IFS=: 00:04:15.148 13:27:54 -- accel/accel.sh@20 -- # read -r var val 00:04:15.148 13:27:54 -- accel/accel.sh@21 -- # val= 00:04:15.148 13:27:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.148 13:27:54 -- accel/accel.sh@20 -- # IFS=: 00:04:15.148 13:27:54 -- accel/accel.sh@20 -- # read -r var val 00:04:15.148 13:27:54 -- accel/accel.sh@21 -- # val= 00:04:15.148 13:27:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.148 13:27:54 -- accel/accel.sh@20 -- # IFS=: 00:04:15.148 13:27:54 -- accel/accel.sh@20 -- # read -r var val 00:04:15.148 13:27:54 -- accel/accel.sh@21 -- # val= 00:04:15.148 13:27:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.148 13:27:54 -- accel/accel.sh@20 -- # IFS=: 00:04:15.148 13:27:54 -- accel/accel.sh@20 -- # read -r var val 00:04:15.148 13:27:54 -- accel/accel.sh@21 -- # val= 00:04:15.148 13:27:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.148 13:27:54 -- accel/accel.sh@20 -- # IFS=: 00:04:15.148 13:27:54 -- accel/accel.sh@20 -- # read -r var val 00:04:15.148 13:27:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:15.148 13:27:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:04:15.148 13:27:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:15.148 00:04:15.148 real 0m3.405s 00:04:15.148 user 0m8.665s 00:04:15.148 sys 0m1.027s 00:04:15.148 13:27:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.148 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:04:15.148 ************************************ 00:04:15.148 END TEST accel_decomp_mcore 
00:04:15.148 ************************************ 00:04:15.148 13:27:54 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:15.148 13:27:54 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:04:15.148 13:27:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:15.148 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:04:15.148 ************************************ 00:04:15.148 START TEST accel_decomp_full_mcore 00:04:15.148 ************************************ 00:04:15.148 13:27:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:15.148 13:27:54 -- accel/accel.sh@16 -- # local accel_opc 00:04:15.148 13:27:54 -- accel/accel.sh@17 -- # local accel_module 00:04:15.148 13:27:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:15.148 13:27:54 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.x8cZcT -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:15.148 [2024-07-10 13:27:54.205781] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:15.148 [2024-07-10 13:27:54.206129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:15.407 EAL: TSC is not safe to use in SMP mode 00:04:15.407 EAL: TSC is not invariant 00:04:15.407 [2024-07-10 13:27:54.655347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:15.407 [2024-07-10 13:27:54.746109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.407 [2024-07-10 13:27:54.746426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.407 [2024-07-10 13:27:54.746275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:15.407 [2024-07-10 13:27:54.746428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:15.407 13:27:54 -- accel/accel.sh@12 -- # build_accel_config 00:04:15.407 13:27:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:15.407 13:27:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:15.407 13:27:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:15.407 13:27:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:15.407 13:27:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:15.407 13:27:54 -- accel/accel.sh@41 -- # local IFS=, 00:04:15.407 13:27:54 -- accel/accel.sh@42 -- # jq -r . 00:04:16.794 13:27:55 -- accel/accel.sh@18 -- # out='Preparing input file... 00:04:16.794 00:04:16.794 SPDK Configuration: 00:04:16.794 Core mask: 0xf 00:04:16.794 00:04:16.794 Accel Perf Configuration: 00:04:16.794 Workload Type: decompress 00:04:16.794 Transfer size: 111250 bytes 00:04:16.794 Vector count 1 00:04:16.794 Module: software 00:04:16.794 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:16.794 Queue depth: 32 00:04:16.794 Allocate depth: 32 00:04:16.794 # threads/core: 1 00:04:16.794 Run time: 1 seconds 00:04:16.794 Verify: Yes 00:04:16.794 00:04:16.794 Running for 1 seconds... 
00:04:16.794 00:04:16.794 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:16.794 ------------------------------------------------------------------------------------ 00:04:16.794 0,0 4096/s 169 MiB/s 0 0 00:04:16.794 3,0 4256/s 175 MiB/s 0 0 00:04:16.794 2,0 4224/s 174 MiB/s 0 0 00:04:16.794 1,0 4224/s 174 MiB/s 0 0 00:04:16.794 ==================================================================================== 00:04:16.794 Total 16800/s 1782 MiB/s 0 0' 00:04:16.794 13:27:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:16.794 13:27:55 -- accel/accel.sh@20 -- # IFS=: 00:04:16.794 13:27:55 -- accel/accel.sh@20 -- # read -r var val 00:04:16.794 13:27:55 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.EYChBT -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:16.794 [2024-07-10 13:27:55.915362] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:16.794 [2024-07-10 13:27:55.915517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:17.054 EAL: TSC is not safe to use in SMP mode 00:04:17.054 EAL: TSC is not invariant 00:04:17.054 [2024-07-10 13:27:56.352275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:17.313 [2024-07-10 13:27:56.434500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.313 [2024-07-10 13:27:56.434738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.313 [2024-07-10 13:27:56.434622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:17.313 [2024-07-10 13:27:56.434738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:17.313 13:27:56 -- accel/accel.sh@12 -- # build_accel_config 00:04:17.313 13:27:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:17.313 13:27:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:17.313 13:27:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:17.313 13:27:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:17.313 13:27:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:17.313 13:27:56 -- accel/accel.sh@41 -- # local IFS=, 00:04:17.313 13:27:56 -- accel/accel.sh@42 -- # jq -r . 
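For the full-buffer variant the same core mask is kept and -o 0 is added; in the configuration dump this turns "Transfer size: 4096 bytes" into "Transfer size: 111250 bytes", and the table above accordingly reports thousands rather than tens of thousands of transfers per second (Total 16800/s x 111250 bytes ≈ 1782 MiB/s). As recorded in this log:

```bash
# Full-buffer multi-core case as recorded above: -o 0 switches the reported
# transfer size to 111250 bytes while keeping the 0xf core mask.
run_test accel_decomp_full_mcore accel_test -t 1 -w decompress \
    -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
```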
00:04:17.313 13:27:56 -- accel/accel.sh@21 -- # val= 00:04:17.313 13:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:17.313 13:27:56 -- accel/accel.sh@20 -- # IFS=: 00:04:17.313 13:27:56 -- accel/accel.sh@20 -- # read -r var val 00:04:17.313 13:27:56 -- accel/accel.sh@21 -- # val= 00:04:17.313 13:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:17.313 13:27:56 -- accel/accel.sh@20 -- # IFS=: 00:04:17.313 13:27:56 -- accel/accel.sh@20 -- # read -r var val 00:04:17.313 13:27:56 -- accel/accel.sh@21 -- # val= 00:04:17.313 13:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:17.313 13:27:56 -- accel/accel.sh@20 -- # IFS=: 00:04:17.313 13:27:56 -- accel/accel.sh@20 -- # read -r var val 00:04:17.313 13:27:56 -- accel/accel.sh@21 -- # val=0xf 00:04:17.313 13:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:17.313 13:27:56 -- accel/accel.sh@20 -- # IFS=: 00:04:17.313 13:27:56 -- accel/accel.sh@20 -- # read -r var val 00:04:17.313 13:27:56 -- accel/accel.sh@21 -- # val= 00:04:17.313 13:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:17.313 13:27:56 -- accel/accel.sh@20 -- # IFS=: 00:04:17.313 13:27:56 -- accel/accel.sh@20 -- # read -r var val 00:04:17.313 13:27:56 -- accel/accel.sh@21 -- # val= 00:04:17.313 13:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:17.313 13:27:56 -- accel/accel.sh@20 -- # IFS=: 00:04:17.313 13:27:56 -- accel/accel.sh@20 -- # read -r var val 00:04:17.313 13:27:56 -- accel/accel.sh@21 -- # val=decompress 00:04:17.313 13:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:17.313 13:27:56 -- accel/accel.sh@24 -- # accel_opc=decompress 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # IFS=: 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # read -r var val 00:04:17.314 13:27:56 -- accel/accel.sh@21 -- # val='111250 bytes' 00:04:17.314 13:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # IFS=: 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # read -r var val 00:04:17.314 13:27:56 -- accel/accel.sh@21 -- # val= 00:04:17.314 13:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # IFS=: 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # read -r var val 00:04:17.314 13:27:56 -- accel/accel.sh@21 -- # val=software 00:04:17.314 13:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:17.314 13:27:56 -- accel/accel.sh@23 -- # accel_module=software 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # IFS=: 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # read -r var val 00:04:17.314 13:27:56 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:17.314 13:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # IFS=: 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # read -r var val 00:04:17.314 13:27:56 -- accel/accel.sh@21 -- # val=32 00:04:17.314 13:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # IFS=: 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # read -r var val 00:04:17.314 13:27:56 -- accel/accel.sh@21 -- # val=32 00:04:17.314 13:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # IFS=: 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # read -r var val 00:04:17.314 13:27:56 -- accel/accel.sh@21 -- # val=1 00:04:17.314 13:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # IFS=: 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # read -r var val 00:04:17.314 13:27:56 -- accel/accel.sh@21 -- # 
val='1 seconds' 00:04:17.314 13:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # IFS=: 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # read -r var val 00:04:17.314 13:27:56 -- accel/accel.sh@21 -- # val=Yes 00:04:17.314 13:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # IFS=: 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # read -r var val 00:04:17.314 13:27:56 -- accel/accel.sh@21 -- # val= 00:04:17.314 13:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # IFS=: 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # read -r var val 00:04:17.314 13:27:56 -- accel/accel.sh@21 -- # val= 00:04:17.314 13:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # IFS=: 00:04:17.314 13:27:56 -- accel/accel.sh@20 -- # read -r var val 00:04:18.250 13:27:57 -- accel/accel.sh@21 -- # val= 00:04:18.250 13:27:57 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.250 13:27:57 -- accel/accel.sh@20 -- # IFS=: 00:04:18.250 13:27:57 -- accel/accel.sh@20 -- # read -r var val 00:04:18.250 13:27:57 -- accel/accel.sh@21 -- # val= 00:04:18.250 13:27:57 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.250 13:27:57 -- accel/accel.sh@20 -- # IFS=: 00:04:18.250 13:27:57 -- accel/accel.sh@20 -- # read -r var val 00:04:18.250 13:27:57 -- accel/accel.sh@21 -- # val= 00:04:18.250 13:27:57 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.250 13:27:57 -- accel/accel.sh@20 -- # IFS=: 00:04:18.250 13:27:57 -- accel/accel.sh@20 -- # read -r var val 00:04:18.250 13:27:57 -- accel/accel.sh@21 -- # val= 00:04:18.250 13:27:57 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.250 13:27:57 -- accel/accel.sh@20 -- # IFS=: 00:04:18.250 13:27:57 -- accel/accel.sh@20 -- # read -r var val 00:04:18.250 13:27:57 -- accel/accel.sh@21 -- # val= 00:04:18.250 13:27:57 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.250 13:27:57 -- accel/accel.sh@20 -- # IFS=: 00:04:18.250 13:27:57 -- accel/accel.sh@20 -- # read -r var val 00:04:18.250 13:27:57 -- accel/accel.sh@21 -- # val= 00:04:18.250 13:27:57 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.250 13:27:57 -- accel/accel.sh@20 -- # IFS=: 00:04:18.250 13:27:57 -- accel/accel.sh@20 -- # read -r var val 00:04:18.250 13:27:57 -- accel/accel.sh@21 -- # val= 00:04:18.250 13:27:57 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.250 13:27:57 -- accel/accel.sh@20 -- # IFS=: 00:04:18.250 13:27:57 -- accel/accel.sh@20 -- # read -r var val 00:04:18.250 13:27:57 -- accel/accel.sh@21 -- # val= 00:04:18.250 13:27:57 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.250 13:27:57 -- accel/accel.sh@20 -- # IFS=: 00:04:18.250 13:27:57 -- accel/accel.sh@20 -- # read -r var val 00:04:18.250 13:27:57 -- accel/accel.sh@21 -- # val= 00:04:18.250 13:27:57 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.250 13:27:57 -- accel/accel.sh@20 -- # IFS=: 00:04:18.250 13:27:57 -- accel/accel.sh@20 -- # read -r var val 00:04:18.250 13:27:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:18.250 13:27:57 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:04:18.250 13:27:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:18.250 00:04:18.250 real 0m3.398s 00:04:18.250 user 0m8.763s 00:04:18.250 sys 0m0.991s 00:04:18.250 13:27:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.250 13:27:57 -- common/autotest_common.sh@10 -- # set +x 00:04:18.250 ************************************ 00:04:18.250 END TEST 
accel_decomp_full_mcore 00:04:18.250 ************************************ 00:04:18.509 13:27:57 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:04:18.509 13:27:57 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:04:18.509 13:27:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:18.509 13:27:57 -- common/autotest_common.sh@10 -- # set +x 00:04:18.509 ************************************ 00:04:18.509 START TEST accel_decomp_mthread 00:04:18.509 ************************************ 00:04:18.509 13:27:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:04:18.509 13:27:57 -- accel/accel.sh@16 -- # local accel_opc 00:04:18.509 13:27:57 -- accel/accel.sh@17 -- # local accel_module 00:04:18.509 13:27:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:04:18.509 13:27:57 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.F9Fy5x -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:04:18.509 [2024-07-10 13:27:57.644779] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:18.509 [2024-07-10 13:27:57.645128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:18.767 EAL: TSC is not safe to use in SMP mode 00:04:18.767 EAL: TSC is not invariant 00:04:18.767 [2024-07-10 13:27:58.077847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.025 [2024-07-10 13:27:58.168584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.025 13:27:58 -- accel/accel.sh@12 -- # build_accel_config 00:04:19.025 13:27:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:19.025 13:27:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:19.025 13:27:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:19.025 13:27:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:19.025 13:27:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:19.025 13:27:58 -- accel/accel.sh@41 -- # local IFS=, 00:04:19.025 13:27:58 -- accel/accel.sh@42 -- # jq -r . 00:04:20.447 13:27:59 -- accel/accel.sh@18 -- # out='Preparing input file... 00:04:20.447 00:04:20.447 SPDK Configuration: 00:04:20.447 Core mask: 0x1 00:04:20.447 00:04:20.447 Accel Perf Configuration: 00:04:20.447 Workload Type: decompress 00:04:20.447 Transfer size: 4096 bytes 00:04:20.447 Vector count 1 00:04:20.447 Module: software 00:04:20.447 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:20.447 Queue depth: 32 00:04:20.447 Allocate depth: 32 00:04:20.447 # threads/core: 2 00:04:20.447 Run time: 1 seconds 00:04:20.447 Verify: Yes 00:04:20.447 00:04:20.447 Running for 1 seconds... 
00:04:20.447 00:04:20.447 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:20.447 ------------------------------------------------------------------------------------ 00:04:20.447 0,1 40544/s 74 MiB/s 0 0 00:04:20.447 0,0 40448/s 74 MiB/s 0 0 00:04:20.447 ==================================================================================== 00:04:20.447 Total 80992/s 316 MiB/s 0 0' 00:04:20.447 13:27:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:04:20.447 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.447 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:20.447 13:27:59 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.IJD8GY -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:04:20.447 [2024-07-10 13:27:59.327267] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:20.447 [2024-07-10 13:27:59.327620] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:20.447 EAL: TSC is not safe to use in SMP mode 00:04:20.447 EAL: TSC is not invariant 00:04:20.447 [2024-07-10 13:27:59.776066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.705 [2024-07-10 13:27:59.866499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.705 13:27:59 -- accel/accel.sh@12 -- # build_accel_config 00:04:20.705 13:27:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:20.705 13:27:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:20.705 13:27:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:20.705 13:27:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:20.705 13:27:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:20.705 13:27:59 -- accel/accel.sh@41 -- # local IFS=, 00:04:20.705 13:27:59 -- accel/accel.sh@42 -- # jq -r . 
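The accel_decomp_mthread case stays on a single core but adds -T 2, which appears as "# threads/core: 2" in the configuration dump and as the two result rows 0,0 and 0,1 above (40448/s + 40544/s = 80992/s ≈ 316 MiB/s). The harness call as recorded in this log:

```bash
# Two-threads-per-core case as recorded above: -T 2 maps to "# threads/core: 2"
# and produces the 0,0 and 0,1 rows in the results table.
run_test accel_decomp_mthread accel_test -t 1 -w decompress \
    -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
```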
00:04:20.705 13:27:59 -- accel/accel.sh@21 -- # val= 00:04:20.705 13:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:20.705 13:27:59 -- accel/accel.sh@21 -- # val= 00:04:20.705 13:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:20.705 13:27:59 -- accel/accel.sh@21 -- # val= 00:04:20.705 13:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:20.705 13:27:59 -- accel/accel.sh@21 -- # val=0x1 00:04:20.705 13:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:20.705 13:27:59 -- accel/accel.sh@21 -- # val= 00:04:20.705 13:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:20.705 13:27:59 -- accel/accel.sh@21 -- # val= 00:04:20.705 13:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:20.705 13:27:59 -- accel/accel.sh@21 -- # val=decompress 00:04:20.705 13:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.705 13:27:59 -- accel/accel.sh@24 -- # accel_opc=decompress 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:20.705 13:27:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:20.705 13:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:20.705 13:27:59 -- accel/accel.sh@21 -- # val= 00:04:20.705 13:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:20.705 13:27:59 -- accel/accel.sh@21 -- # val=software 00:04:20.705 13:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.705 13:27:59 -- accel/accel.sh@23 -- # accel_module=software 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:20.705 13:27:59 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:20.705 13:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:20.705 13:27:59 -- accel/accel.sh@21 -- # val=32 00:04:20.705 13:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:20.705 13:27:59 -- accel/accel.sh@21 -- # val=32 00:04:20.705 13:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:20.705 13:27:59 -- accel/accel.sh@21 -- # val=2 00:04:20.705 13:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:20.705 13:27:59 -- accel/accel.sh@21 -- # val='1 
seconds' 00:04:20.705 13:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:20.705 13:27:59 -- accel/accel.sh@21 -- # val=Yes 00:04:20.705 13:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:20.705 13:27:59 -- accel/accel.sh@21 -- # val= 00:04:20.705 13:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:20.705 13:27:59 -- accel/accel.sh@21 -- # val= 00:04:20.705 13:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # IFS=: 00:04:20.705 13:27:59 -- accel/accel.sh@20 -- # read -r var val 00:04:22.078 13:28:01 -- accel/accel.sh@21 -- # val= 00:04:22.078 13:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.078 13:28:01 -- accel/accel.sh@20 -- # IFS=: 00:04:22.078 13:28:01 -- accel/accel.sh@20 -- # read -r var val 00:04:22.078 13:28:01 -- accel/accel.sh@21 -- # val= 00:04:22.078 13:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.078 13:28:01 -- accel/accel.sh@20 -- # IFS=: 00:04:22.078 13:28:01 -- accel/accel.sh@20 -- # read -r var val 00:04:22.078 13:28:01 -- accel/accel.sh@21 -- # val= 00:04:22.078 13:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.078 13:28:01 -- accel/accel.sh@20 -- # IFS=: 00:04:22.078 13:28:01 -- accel/accel.sh@20 -- # read -r var val 00:04:22.078 13:28:01 -- accel/accel.sh@21 -- # val= 00:04:22.078 13:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.078 13:28:01 -- accel/accel.sh@20 -- # IFS=: 00:04:22.078 13:28:01 -- accel/accel.sh@20 -- # read -r var val 00:04:22.078 13:28:01 -- accel/accel.sh@21 -- # val= 00:04:22.078 13:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.078 13:28:01 -- accel/accel.sh@20 -- # IFS=: 00:04:22.078 13:28:01 -- accel/accel.sh@20 -- # read -r var val 00:04:22.078 13:28:01 -- accel/accel.sh@21 -- # val= 00:04:22.078 13:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.078 13:28:01 -- accel/accel.sh@20 -- # IFS=: 00:04:22.078 13:28:01 -- accel/accel.sh@20 -- # read -r var val 00:04:22.078 13:28:01 -- accel/accel.sh@21 -- # val= 00:04:22.078 13:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.078 13:28:01 -- accel/accel.sh@20 -- # IFS=: 00:04:22.078 13:28:01 -- accel/accel.sh@20 -- # read -r var val 00:04:22.078 13:28:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:22.078 13:28:01 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:04:22.078 13:28:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:22.078 00:04:22.078 real 0m3.389s 00:04:22.078 user 0m2.435s 00:04:22.078 sys 0m0.972s 00:04:22.078 13:28:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.078 13:28:01 -- common/autotest_common.sh@10 -- # set +x 00:04:22.078 ************************************ 00:04:22.078 END TEST accel_decomp_mthread 00:04:22.078 ************************************ 00:04:22.078 13:28:01 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:04:22.078 13:28:01 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:04:22.078 13:28:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:22.078 13:28:01 -- common/autotest_common.sh@10 -- # set +x 00:04:22.078 
************************************ 00:04:22.078 START TEST accel_deomp_full_mthread 00:04:22.078 ************************************ 00:04:22.078 13:28:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:04:22.078 13:28:01 -- accel/accel.sh@16 -- # local accel_opc 00:04:22.078 13:28:01 -- accel/accel.sh@17 -- # local accel_module 00:04:22.078 13:28:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:04:22.078 13:28:01 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.s5YPHE -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:04:22.078 [2024-07-10 13:28:01.081323] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:22.078 [2024-07-10 13:28:01.081681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:22.336 EAL: TSC is not safe to use in SMP mode 00:04:22.336 EAL: TSC is not invariant 00:04:22.336 [2024-07-10 13:28:01.522744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.336 [2024-07-10 13:28:01.613901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.336 13:28:01 -- accel/accel.sh@12 -- # build_accel_config 00:04:22.336 13:28:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:22.336 13:28:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:22.336 13:28:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:22.336 13:28:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:22.336 13:28:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:22.336 13:28:01 -- accel/accel.sh@41 -- # local IFS=, 00:04:22.336 13:28:01 -- accel/accel.sh@42 -- # jq -r . 00:04:23.719 13:28:02 -- accel/accel.sh@18 -- # out='Preparing input file... 00:04:23.719 00:04:23.719 SPDK Configuration: 00:04:23.719 Core mask: 0x1 00:04:23.719 00:04:23.719 Accel Perf Configuration: 00:04:23.719 Workload Type: decompress 00:04:23.719 Transfer size: 111250 bytes 00:04:23.719 Vector count 1 00:04:23.719 Module: software 00:04:23.719 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:23.719 Queue depth: 32 00:04:23.719 Allocate depth: 32 00:04:23.719 # threads/core: 2 00:04:23.719 Run time: 1 seconds 00:04:23.719 Verify: Yes 00:04:23.719 00:04:23.719 Running for 1 seconds... 00:04:23.719 00:04:23.719 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:23.719 ------------------------------------------------------------------------------------ 00:04:23.719 0,1 2336/s 96 MiB/s 0 0 00:04:23.719 0,0 2304/s 95 MiB/s 0 0 00:04:23.719 ==================================================================================== 00:04:23.719 Total 4640/s 492 MiB/s 0 0' 00:04:23.719 13:28:02 -- accel/accel.sh@20 -- # IFS=: 00:04:23.719 13:28:02 -- accel/accel.sh@20 -- # read -r var val 00:04:23.719 13:28:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:04:23.719 13:28:02 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.LrMgGX -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:04:23.719 [2024-07-10 13:28:02.792785] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
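For reference, the numbers in the bandwidth table above come from the standalone accel_perf example whose command line is recorded just before it. A minimal sketch of the same software-module decompress run, assuming the working directory is the SPDK repo root used throughout this log (the -c temp-JSON config is omitted):

    # sketch: 1-second decompress of the bundled bib test file, 2 threads per core;
    # flags copied from the accel_perf invocation captured in the trace above
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0 -T 2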
00:04:23.719 [2024-07-10 13:28:02.793168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:23.977 EAL: TSC is not safe to use in SMP mode 00:04:23.978 EAL: TSC is not invariant 00:04:23.978 [2024-07-10 13:28:03.260437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.245 [2024-07-10 13:28:03.351105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.245 13:28:03 -- accel/accel.sh@12 -- # build_accel_config 00:04:24.245 13:28:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:24.245 13:28:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:24.245 13:28:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:24.245 13:28:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:24.245 13:28:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:24.245 13:28:03 -- accel/accel.sh@41 -- # local IFS=, 00:04:24.245 13:28:03 -- accel/accel.sh@42 -- # jq -r . 00:04:24.245 13:28:03 -- accel/accel.sh@21 -- # val= 00:04:24.245 13:28:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # IFS=: 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # read -r var val 00:04:24.245 13:28:03 -- accel/accel.sh@21 -- # val= 00:04:24.245 13:28:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # IFS=: 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # read -r var val 00:04:24.245 13:28:03 -- accel/accel.sh@21 -- # val= 00:04:24.245 13:28:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # IFS=: 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # read -r var val 00:04:24.245 13:28:03 -- accel/accel.sh@21 -- # val=0x1 00:04:24.245 13:28:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # IFS=: 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # read -r var val 00:04:24.245 13:28:03 -- accel/accel.sh@21 -- # val= 00:04:24.245 13:28:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # IFS=: 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # read -r var val 00:04:24.245 13:28:03 -- accel/accel.sh@21 -- # val= 00:04:24.245 13:28:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # IFS=: 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # read -r var val 00:04:24.245 13:28:03 -- accel/accel.sh@21 -- # val=decompress 00:04:24.245 13:28:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:24.245 13:28:03 -- accel/accel.sh@24 -- # accel_opc=decompress 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # IFS=: 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # read -r var val 00:04:24.245 13:28:03 -- accel/accel.sh@21 -- # val='111250 bytes' 00:04:24.245 13:28:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # IFS=: 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # read -r var val 00:04:24.245 13:28:03 -- accel/accel.sh@21 -- # val= 00:04:24.245 13:28:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # IFS=: 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # read -r var val 00:04:24.245 13:28:03 -- accel/accel.sh@21 -- # val=software 00:04:24.245 13:28:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:24.245 13:28:03 -- accel/accel.sh@23 -- # accel_module=software 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # IFS=: 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # read -r var val 00:04:24.245 13:28:03 -- accel/accel.sh@21 -- # 
val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:24.245 13:28:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # IFS=: 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # read -r var val 00:04:24.245 13:28:03 -- accel/accel.sh@21 -- # val=32 00:04:24.245 13:28:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # IFS=: 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # read -r var val 00:04:24.245 13:28:03 -- accel/accel.sh@21 -- # val=32 00:04:24.245 13:28:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # IFS=: 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # read -r var val 00:04:24.245 13:28:03 -- accel/accel.sh@21 -- # val=2 00:04:24.245 13:28:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # IFS=: 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # read -r var val 00:04:24.245 13:28:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:24.245 13:28:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # IFS=: 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # read -r var val 00:04:24.245 13:28:03 -- accel/accel.sh@21 -- # val=Yes 00:04:24.245 13:28:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # IFS=: 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # read -r var val 00:04:24.245 13:28:03 -- accel/accel.sh@21 -- # val= 00:04:24.245 13:28:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # IFS=: 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # read -r var val 00:04:24.245 13:28:03 -- accel/accel.sh@21 -- # val= 00:04:24.245 13:28:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # IFS=: 00:04:24.245 13:28:03 -- accel/accel.sh@20 -- # read -r var val 00:04:25.193 13:28:04 -- accel/accel.sh@21 -- # val= 00:04:25.193 13:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.193 13:28:04 -- accel/accel.sh@20 -- # IFS=: 00:04:25.193 13:28:04 -- accel/accel.sh@20 -- # read -r var val 00:04:25.193 13:28:04 -- accel/accel.sh@21 -- # val= 00:04:25.194 13:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.194 13:28:04 -- accel/accel.sh@20 -- # IFS=: 00:04:25.194 13:28:04 -- accel/accel.sh@20 -- # read -r var val 00:04:25.194 13:28:04 -- accel/accel.sh@21 -- # val= 00:04:25.194 13:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.194 13:28:04 -- accel/accel.sh@20 -- # IFS=: 00:04:25.194 13:28:04 -- accel/accel.sh@20 -- # read -r var val 00:04:25.194 13:28:04 -- accel/accel.sh@21 -- # val= 00:04:25.194 13:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.194 13:28:04 -- accel/accel.sh@20 -- # IFS=: 00:04:25.194 13:28:04 -- accel/accel.sh@20 -- # read -r var val 00:04:25.194 13:28:04 -- accel/accel.sh@21 -- # val= 00:04:25.194 13:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.194 13:28:04 -- accel/accel.sh@20 -- # IFS=: 00:04:25.194 13:28:04 -- accel/accel.sh@20 -- # read -r var val 00:04:25.194 13:28:04 -- accel/accel.sh@21 -- # val= 00:04:25.194 13:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.194 13:28:04 -- accel/accel.sh@20 -- # IFS=: 00:04:25.194 13:28:04 -- accel/accel.sh@20 -- # read -r var val 00:04:25.194 13:28:04 -- accel/accel.sh@21 -- # val= 00:04:25.194 13:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.194 13:28:04 -- accel/accel.sh@20 -- # IFS=: 00:04:25.194 13:28:04 -- accel/accel.sh@20 -- # read -r var val 00:04:25.194 13:28:04 -- 
accel/accel.sh@28 -- # [[ -n software ]] 00:04:25.194 13:28:04 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:04:25.194 13:28:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:25.194 00:04:25.194 real 0m3.461s 00:04:25.194 user 0m2.484s 00:04:25.194 sys 0m0.992s 00:04:25.194 13:28:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.194 13:28:04 -- common/autotest_common.sh@10 -- # set +x 00:04:25.194 ************************************ 00:04:25.194 END TEST accel_deomp_full_mthread 00:04:25.194 ************************************ 00:04:25.451 13:28:04 -- accel/accel.sh@116 -- # [[ n == y ]] 00:04:25.451 13:28:04 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.k77kEL 00:04:25.451 13:28:04 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:25.451 13:28:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:25.451 13:28:04 -- common/autotest_common.sh@10 -- # set +x 00:04:25.451 ************************************ 00:04:25.451 START TEST accel_dif_functional_tests 00:04:25.451 ************************************ 00:04:25.451 13:28:04 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.k77kEL 00:04:25.451 [2024-07-10 13:28:04.587332] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:25.451 [2024-07-10 13:28:04.587497] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:25.709 EAL: TSC is not safe to use in SMP mode 00:04:25.709 EAL: TSC is not invariant 00:04:25.709 [2024-07-10 13:28:05.060054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:25.968 [2024-07-10 13:28:05.152286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.968 [2024-07-10 13:28:05.152196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.968 [2024-07-10 13:28:05.152289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:25.968 13:28:05 -- accel/accel.sh@129 -- # build_accel_config 00:04:25.968 13:28:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:25.968 13:28:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:25.968 13:28:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:25.968 13:28:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:25.968 13:28:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:25.968 13:28:05 -- accel/accel.sh@41 -- # local IFS=, 00:04:25.968 13:28:05 -- accel/accel.sh@42 -- # jq -r . 
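The CUnit output that follows is produced by the standalone DIF functional-test binary launched above. Run by hand it is a single invocation; the JSON path below is a hypothetical stand-in for the temporary config (/tmp//sh-np.*) that the wrapper script normally generates:

    # sketch: run the accel DIF CUnit suite directly from the repo root
    ./test/accel/dif/dif -c /tmp/accel_dif.json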
00:04:25.968 00:04:25.968 00:04:25.968 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.968 http://cunit.sourceforge.net/ 00:04:25.968 00:04:25.968 00:04:25.968 Suite: accel_dif 00:04:25.968 Test: verify: DIF generated, GUARD check ...passed 00:04:25.968 Test: verify: DIF generated, APPTAG check ...passed 00:04:25.968 Test: verify: DIF generated, REFTAG check ...passed 00:04:25.968 Test: verify: DIF not generated, GUARD check ...passed 00:04:25.968 Test: verify: DIF not generated, APPTAG check ...passed 00:04:25.968 Test: verify: DIF not generated, REFTAG check ...[2024-07-10 13:28:05.178788] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:04:25.968 [2024-07-10 13:28:05.178843] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:04:25.968 [2024-07-10 13:28:05.178870] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:04:25.968 [2024-07-10 13:28:05.178906] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:04:25.968 [2024-07-10 13:28:05.178918] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:04:25.968 [2024-07-10 13:28:05.178944] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:04:25.968 passed 00:04:25.968 Test: verify: APPTAG correct, APPTAG check ...passed 00:04:25.968 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:04:25.968 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:04:25.968 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:04:25.968 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:04:25.968 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-10 13:28:05.178970] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:04:25.968 passed 00:04:25.968 Test: generate copy: DIF generated, GUARD check ...passed 00:04:25.968 Test: generate copy: DIF generated, APTTAG check ...[2024-07-10 13:28:05.179068] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:04:25.968 passed 00:04:25.968 Test: generate copy: DIF generated, REFTAG check ...passed 00:04:25.968 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:04:25.968 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:04:25.968 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:04:25.968 Test: generate copy: iovecs-len validate ...[2024-07-10 13:28:05.179207] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:04:25.968 passed 00:04:25.968 Test: generate copy: buffer alignment validate ...passed 00:04:25.968 00:04:25.968 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.968 suites 1 1 n/a 0 0 00:04:25.968 tests 20 20 20 0 0 00:04:25.968 asserts 204 204 204 0 n/a 00:04:25.968 00:04:25.968 Elapsed time = 0.000 seconds 00:04:26.228 00:04:26.228 real 0m0.742s 00:04:26.228 user 0m0.343s 00:04:26.228 sys 0m0.543s 00:04:26.228 13:28:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.228 13:28:05 -- common/autotest_common.sh@10 -- # set +x 00:04:26.228 ************************************ 00:04:26.228 END TEST accel_dif_functional_tests 00:04:26.228 ************************************ 00:04:26.228 00:04:26.228 real 1m13.107s 00:04:26.228 user 1m3.698s 00:04:26.228 sys 0m23.037s 00:04:26.228 13:28:05 -- accel/accel.sh@12 -- # build_accel_config 00:04:26.228 13:28:05 -- accel/accel.sh@12 -- # build_accel_config 00:04:26.228 13:28:05 -- accel/accel.sh@12 -- # build_accel_config 00:04:26.228 13:28:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:26.228 13:28:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:26.228 13:28:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.228 13:28:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:26.228 13:28:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:26.228 13:28:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:26.228 13:28:05 -- common/autotest_common.sh@10 -- # set +x 00:04:26.228 13:28:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:26.228 13:28:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:26.228 ************************************ 00:04:26.228 END TEST accel 00:04:26.228 ************************************ 00:04:26.228 13:28:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:26.228 13:28:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:26.228 13:28:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:26.228 13:28:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:26.228 13:28:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:26.228 13:28:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:26.228 13:28:05 -- accel/accel.sh@41 -- # local IFS=, 00:04:26.228 13:28:05 -- accel/accel.sh@41 -- # local IFS=, 00:04:26.228 13:28:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:26.228 13:28:05 -- accel/accel.sh@42 -- # jq -r . 00:04:26.228 13:28:05 -- accel/accel.sh@42 -- # jq -r . 00:04:26.228 13:28:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:26.228 13:28:05 -- accel/accel.sh@41 -- # local IFS=, 00:04:26.228 13:28:05 -- accel/accel.sh@42 -- # jq -r . 00:04:26.228 13:28:05 -- spdk/autotest.sh@190 -- # run_test accel_rpc /usr/home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:04:26.228 13:28:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:26.228 13:28:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:26.228 13:28:05 -- common/autotest_common.sh@10 -- # set +x 00:04:26.228 ************************************ 00:04:26.228 START TEST accel_rpc 00:04:26.228 ************************************ 00:04:26.228 13:28:05 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:04:26.488 * Looking for test storage... 
00:04:26.488 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/accel 00:04:26.488 13:28:05 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:26.488 13:28:05 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=46790 00:04:26.488 13:28:05 -- accel/accel_rpc.sh@13 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:04:26.488 13:28:05 -- accel/accel_rpc.sh@15 -- # waitforlisten 46790 00:04:26.488 13:28:05 -- common/autotest_common.sh@819 -- # '[' -z 46790 ']' 00:04:26.488 13:28:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.488 13:28:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:26.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.488 13:28:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.488 13:28:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:26.488 13:28:05 -- common/autotest_common.sh@10 -- # set +x 00:04:26.488 [2024-07-10 13:28:05.605173] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:26.488 [2024-07-10 13:28:05.605452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:26.747 EAL: TSC is not safe to use in SMP mode 00:04:26.747 EAL: TSC is not invariant 00:04:26.747 [2024-07-10 13:28:06.049727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.006 [2024-07-10 13:28:06.142569] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:27.006 [2024-07-10 13:28:06.142668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.266 13:28:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:27.266 13:28:06 -- common/autotest_common.sh@852 -- # return 0 00:04:27.266 13:28:06 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:04:27.266 13:28:06 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:04:27.266 13:28:06 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:04:27.266 13:28:06 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:04:27.266 13:28:06 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:04:27.266 13:28:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:27.266 13:28:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:27.266 13:28:06 -- common/autotest_common.sh@10 -- # set +x 00:04:27.266 ************************************ 00:04:27.266 START TEST accel_assign_opcode 00:04:27.266 ************************************ 00:04:27.266 13:28:06 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:04:27.266 13:28:06 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:04:27.266 13:28:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.266 13:28:06 -- common/autotest_common.sh@10 -- # set +x 00:04:27.266 [2024-07-10 13:28:06.562928] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:04:27.266 13:28:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.266 13:28:06 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:04:27.266 13:28:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.266 13:28:06 -- common/autotest_common.sh@10 -- # set +x 00:04:27.266 [2024-07-10 13:28:06.570922] 
accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:04:27.266 13:28:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.266 13:28:06 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:04:27.266 13:28:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.266 13:28:06 -- common/autotest_common.sh@10 -- # set +x 00:04:27.266 13:28:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.266 13:28:06 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:04:27.266 13:28:06 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:04:27.266 13:28:06 -- accel/accel_rpc.sh@42 -- # grep software 00:04:27.266 13:28:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.266 13:28:06 -- common/autotest_common.sh@10 -- # set +x 00:04:27.525 13:28:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.525 software 00:04:27.525 00:04:27.525 real 0m0.071s 00:04:27.525 user 0m0.006s 00:04:27.525 sys 0m0.021s 00:04:27.525 13:28:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.525 13:28:06 -- common/autotest_common.sh@10 -- # set +x 00:04:27.525 ************************************ 00:04:27.525 END TEST accel_assign_opcode 00:04:27.525 ************************************ 00:04:27.525 13:28:06 -- accel/accel_rpc.sh@55 -- # killprocess 46790 00:04:27.525 13:28:06 -- common/autotest_common.sh@926 -- # '[' -z 46790 ']' 00:04:27.525 13:28:06 -- common/autotest_common.sh@930 -- # kill -0 46790 00:04:27.525 13:28:06 -- common/autotest_common.sh@931 -- # uname 00:04:27.525 13:28:06 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:04:27.525 13:28:06 -- common/autotest_common.sh@934 -- # ps -c -o command 46790 00:04:27.525 13:28:06 -- common/autotest_common.sh@934 -- # tail -1 00:04:27.525 13:28:06 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:04:27.525 13:28:06 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:04:27.525 killing process with pid 46790 00:04:27.525 13:28:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46790' 00:04:27.525 13:28:06 -- common/autotest_common.sh@945 -- # kill 46790 00:04:27.525 13:28:06 -- common/autotest_common.sh@950 -- # wait 46790 00:04:27.784 00:04:27.784 real 0m1.484s 00:04:27.784 user 0m1.326s 00:04:27.784 sys 0m0.718s 00:04:27.784 13:28:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.784 ************************************ 00:04:27.784 END TEST accel_rpc 00:04:27.784 ************************************ 00:04:27.784 13:28:06 -- common/autotest_common.sh@10 -- # set +x 00:04:27.784 13:28:06 -- spdk/autotest.sh@191 -- # run_test app_cmdline /usr/home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:04:27.784 13:28:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:27.784 13:28:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:27.784 13:28:06 -- common/autotest_common.sh@10 -- # set +x 00:04:27.784 ************************************ 00:04:27.784 START TEST app_cmdline 00:04:27.784 ************************************ 00:04:27.784 13:28:06 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:04:27.784 * Looking for test storage... 
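The accel_rpc opcode-assignment sequence above reduces to a short JSON-RPC exchange against a target started with --wait-for-rpc. A sketch of the same flow with rpc.py, assuming it is run from the repo root against the default /var/tmp/spdk.sock socket:

    # start the target pre-init so opcode assignments can still be changed
    # (wait for the RPC socket to appear before issuing calls)
    ./build/bin/spdk_tgt --wait-for-rpc &
    ./scripts/rpc.py accel_assign_opc -o copy -m software    # route the 'copy' opcode to the software module
    ./scripts/rpc.py framework_start_init                    # finish subsystem initialization
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy # expected output: software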
00:04:27.784 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/app 00:04:27.784 13:28:07 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:27.784 13:28:07 -- app/cmdline.sh@16 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:27.784 13:28:07 -- app/cmdline.sh@17 -- # spdk_tgt_pid=46863 00:04:27.784 13:28:07 -- app/cmdline.sh@18 -- # waitforlisten 46863 00:04:27.784 13:28:07 -- common/autotest_common.sh@819 -- # '[' -z 46863 ']' 00:04:27.784 13:28:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.784 13:28:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:27.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.784 13:28:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.784 13:28:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:27.784 13:28:07 -- common/autotest_common.sh@10 -- # set +x 00:04:27.784 [2024-07-10 13:28:07.132579] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:27.784 [2024-07-10 13:28:07.132722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:28.400 EAL: TSC is not safe to use in SMP mode 00:04:28.400 EAL: TSC is not invariant 00:04:28.400 [2024-07-10 13:28:07.577580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.400 [2024-07-10 13:28:07.666220] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:28.400 [2024-07-10 13:28:07.666309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.982 13:28:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:28.982 13:28:08 -- common/autotest_common.sh@852 -- # return 0 00:04:28.982 13:28:08 -- app/cmdline.sh@20 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:04:28.982 { 00:04:28.982 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:04:28.982 "fields": { 00:04:28.982 "major": 24, 00:04:28.982 "minor": 1, 00:04:28.982 "patch": 1, 00:04:28.982 "suffix": "-pre", 00:04:28.982 "commit": "4b94202c6" 00:04:28.982 } 00:04:28.982 } 00:04:28.982 13:28:08 -- app/cmdline.sh@22 -- # expected_methods=() 00:04:28.982 13:28:08 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:28.982 13:28:08 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:04:28.982 13:28:08 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:28.982 13:28:08 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:28.982 13:28:08 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:28.982 13:28:08 -- app/cmdline.sh@26 -- # sort 00:04:28.982 13:28:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:28.982 13:28:08 -- common/autotest_common.sh@10 -- # set +x 00:04:28.982 13:28:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:28.982 13:28:08 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:28.982 13:28:08 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:28.982 13:28:08 -- app/cmdline.sh@30 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:28.982 13:28:08 -- common/autotest_common.sh@640 -- # local es=0 00:04:28.982 
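The cmdline test starts spdk_tgt with --rpcs-allowed, so only the two whitelisted methods are reachable and the env_dpdk_get_mem_stats probe that follows is expected to fail with -32601. A sketch of the same check with rpc.py (repo root and default socket assumed):

    # only spdk_get_version and rpc_get_methods are whitelisted
    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version        # allowed: returns the version JSON shown above
    ./scripts/rpc.py rpc_get_methods         # allowed: lists exactly the two whitelisted methods
    ./scripts/rpc.py env_dpdk_get_mem_stats  # not whitelisted: expect 'Method not found' (-32601)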
13:28:08 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:28.982 13:28:08 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:04:28.982 13:28:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:28.982 13:28:08 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:04:28.982 13:28:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:28.982 13:28:08 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:04:28.982 13:28:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:28.982 13:28:08 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:04:28.982 13:28:08 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:04:28.982 13:28:08 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:29.242 request: 00:04:29.242 { 00:04:29.242 "method": "env_dpdk_get_mem_stats", 00:04:29.242 "req_id": 1 00:04:29.242 } 00:04:29.242 Got JSON-RPC error response 00:04:29.242 response: 00:04:29.242 { 00:04:29.242 "code": -32601, 00:04:29.242 "message": "Method not found" 00:04:29.242 } 00:04:29.242 13:28:08 -- common/autotest_common.sh@643 -- # es=1 00:04:29.242 13:28:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:29.242 13:28:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:29.242 13:28:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:29.242 13:28:08 -- app/cmdline.sh@1 -- # killprocess 46863 00:04:29.242 13:28:08 -- common/autotest_common.sh@926 -- # '[' -z 46863 ']' 00:04:29.242 13:28:08 -- common/autotest_common.sh@930 -- # kill -0 46863 00:04:29.242 13:28:08 -- common/autotest_common.sh@931 -- # uname 00:04:29.242 13:28:08 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:04:29.242 13:28:08 -- common/autotest_common.sh@934 -- # tail -1 00:04:29.242 13:28:08 -- common/autotest_common.sh@934 -- # ps -c -o command 46863 00:04:29.242 13:28:08 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:04:29.242 13:28:08 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:04:29.242 killing process with pid 46863 00:04:29.242 13:28:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46863' 00:04:29.242 13:28:08 -- common/autotest_common.sh@945 -- # kill 46863 00:04:29.242 13:28:08 -- common/autotest_common.sh@950 -- # wait 46863 00:04:29.500 ************************************ 00:04:29.500 END TEST app_cmdline 00:04:29.500 ************************************ 00:04:29.500 00:04:29.500 real 0m1.782s 00:04:29.500 user 0m1.928s 00:04:29.500 sys 0m0.785s 00:04:29.500 13:28:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.500 13:28:08 -- common/autotest_common.sh@10 -- # set +x 00:04:29.500 13:28:08 -- spdk/autotest.sh@192 -- # run_test version /usr/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:04:29.500 13:28:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:29.500 13:28:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:29.500 13:28:08 -- common/autotest_common.sh@10 -- # set +x 00:04:29.500 ************************************ 00:04:29.500 START TEST version 00:04:29.500 ************************************ 00:04:29.500 13:28:08 -- common/autotest_common.sh@1104 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:04:29.759 * Looking for test storage... 00:04:29.759 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/app 00:04:29.759 13:28:08 -- app/version.sh@17 -- # get_header_version major 00:04:29.759 13:28:08 -- app/version.sh@14 -- # cut -f2 00:04:29.759 13:28:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:04:29.759 13:28:08 -- app/version.sh@14 -- # tr -d '"' 00:04:29.759 13:28:08 -- app/version.sh@17 -- # major=24 00:04:29.759 13:28:08 -- app/version.sh@18 -- # get_header_version minor 00:04:29.759 13:28:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:04:29.759 13:28:08 -- app/version.sh@14 -- # cut -f2 00:04:29.759 13:28:08 -- app/version.sh@14 -- # tr -d '"' 00:04:29.759 13:28:08 -- app/version.sh@18 -- # minor=1 00:04:29.759 13:28:08 -- app/version.sh@19 -- # get_header_version patch 00:04:29.759 13:28:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:04:29.759 13:28:08 -- app/version.sh@14 -- # cut -f2 00:04:29.759 13:28:08 -- app/version.sh@14 -- # tr -d '"' 00:04:29.759 13:28:08 -- app/version.sh@19 -- # patch=1 00:04:29.759 13:28:08 -- app/version.sh@20 -- # get_header_version suffix 00:04:29.759 13:28:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:04:29.759 13:28:08 -- app/version.sh@14 -- # cut -f2 00:04:29.759 13:28:08 -- app/version.sh@14 -- # tr -d '"' 00:04:29.759 13:28:08 -- app/version.sh@20 -- # suffix=-pre 00:04:29.759 13:28:08 -- app/version.sh@22 -- # version=24.1 00:04:29.759 13:28:08 -- app/version.sh@25 -- # (( patch != 0 )) 00:04:29.759 13:28:08 -- app/version.sh@25 -- # version=24.1.1 00:04:29.759 13:28:08 -- app/version.sh@28 -- # version=24.1.1rc0 00:04:29.759 13:28:08 -- app/version.sh@30 -- # PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:04:29.759 13:28:08 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:29.759 13:28:09 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:04:29.759 13:28:09 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:04:29.759 00:04:29.759 real 0m0.253s 00:04:29.759 user 0m0.167s 00:04:29.759 sys 0m0.177s 00:04:29.759 13:28:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.759 13:28:09 -- common/autotest_common.sh@10 -- # set +x 00:04:29.759 ************************************ 00:04:29.759 END TEST version 00:04:29.759 ************************************ 00:04:29.759 13:28:09 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:04:29.760 13:28:09 -- spdk/autotest.sh@195 -- # run_test blockdev_general /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:04:29.760 13:28:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:29.760 13:28:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:29.760 13:28:09 -- common/autotest_common.sh@10 -- # set +x 00:04:29.760 ************************************ 00:04:29.760 START TEST blockdev_general 00:04:29.760 ************************************ 00:04:29.760 13:28:09 -- common/autotest_common.sh@1104 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:04:30.019 * Looking for test storage... 00:04:30.019 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:04:30.019 13:28:09 -- bdev/blockdev.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:30.019 13:28:09 -- bdev/nbd_common.sh@6 -- # set -e 00:04:30.019 13:28:09 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:04:30.019 13:28:09 -- bdev/blockdev.sh@13 -- # conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:04:30.019 13:28:09 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:04:30.019 13:28:09 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:04:30.019 13:28:09 -- bdev/blockdev.sh@18 -- # : 00:04:30.019 13:28:09 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:04:30.019 13:28:09 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:04:30.019 13:28:09 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:04:30.019 13:28:09 -- bdev/blockdev.sh@672 -- # uname -s 00:04:30.019 13:28:09 -- bdev/blockdev.sh@672 -- # '[' FreeBSD = Linux ']' 00:04:30.019 13:28:09 -- bdev/blockdev.sh@677 -- # PRE_RESERVED_MEM=2048 00:04:30.019 13:28:09 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:04:30.019 13:28:09 -- bdev/blockdev.sh@681 -- # crypto_device= 00:04:30.019 13:28:09 -- bdev/blockdev.sh@682 -- # dek= 00:04:30.019 13:28:09 -- bdev/blockdev.sh@683 -- # env_ctx= 00:04:30.019 13:28:09 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:04:30.019 13:28:09 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:04:30.019 13:28:09 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:04:30.019 13:28:09 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:04:30.019 13:28:09 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:04:30.019 13:28:09 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=46988 00:04:30.019 13:28:09 -- bdev/blockdev.sh@44 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:04:30.019 13:28:09 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:04:30.019 13:28:09 -- bdev/blockdev.sh@47 -- # waitforlisten 46988 00:04:30.019 13:28:09 -- common/autotest_common.sh@819 -- # '[' -z 46988 ']' 00:04:30.019 13:28:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.019 13:28:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:30.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.019 13:28:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.019 13:28:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:30.019 13:28:09 -- common/autotest_common.sh@10 -- # set +x 00:04:30.019 [2024-07-10 13:28:09.255932] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:30.019 [2024-07-10 13:28:09.256107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:30.587 EAL: TSC is not safe to use in SMP mode 00:04:30.587 EAL: TSC is not invariant 00:04:30.587 [2024-07-10 13:28:09.681847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.587 [2024-07-10 13:28:09.757943] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:30.587 [2024-07-10 13:28:09.758087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.154 13:28:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:31.154 13:28:10 -- common/autotest_common.sh@852 -- # return 0 00:04:31.154 13:28:10 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:04:31.154 13:28:10 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:04:31.154 13:28:10 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:04:31.154 13:28:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:31.154 13:28:10 -- common/autotest_common.sh@10 -- # set +x 00:04:31.154 [2024-07-10 13:28:10.259434] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:04:31.154 [2024-07-10 13:28:10.259501] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:04:31.154 00:04:31.154 [2024-07-10 13:28:10.267429] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:04:31.154 [2024-07-10 13:28:10.267461] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:04:31.154 00:04:31.154 Malloc0 00:04:31.154 Malloc1 00:04:31.154 Malloc2 00:04:31.154 Malloc3 00:04:31.154 Malloc4 00:04:31.154 Malloc5 00:04:31.154 Malloc6 00:04:31.154 Malloc7 00:04:31.154 Malloc8 00:04:31.154 Malloc9 00:04:31.154 [2024-07-10 13:28:10.359442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:31.154 [2024-07-10 13:28:10.359488] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:31.154 [2024-07-10 13:28:10.359518] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ca60700 00:04:31.154 [2024-07-10 13:28:10.359526] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:31.154 [2024-07-10 13:28:10.359929] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:31.154 [2024-07-10 13:28:10.359965] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:04:31.154 TestPT 00:04:31.154 13:28:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:31.154 13:28:10 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:04:31.154 5000+0 records in 00:04:31.154 5000+0 records out 00:04:31.154 10240000 bytes transferred in 0.031059 secs (329695341 bytes/sec) 00:04:31.154 13:28:10 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:04:31.154 13:28:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:31.154 13:28:10 -- common/autotest_common.sh@10 -- # set +x 00:04:31.154 AIO0 00:04:31.154 13:28:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:31.154 13:28:10 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:04:31.154 13:28:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:31.154 13:28:10 -- common/autotest_common.sh@10 -- # set +x 
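The AIO0 device registered above is backed by a plain file created with dd and exposed over JSON-RPC. The equivalent standalone steps, with the same paths and block size as in the trace:

    # sketch: create a 5000-block backing file and register it as bdev AIO0 with 2048-byte blocks
    dd if=/dev/zero of=/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
    ./scripts/rpc.py bdev_aio_create /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048
    ./scripts/rpc.py bdev_wait_for_examine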
00:04:31.154 13:28:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:31.154 13:28:10 -- bdev/blockdev.sh@738 -- # cat 00:04:31.154 13:28:10 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:04:31.154 13:28:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:31.154 13:28:10 -- common/autotest_common.sh@10 -- # set +x 00:04:31.154 13:28:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:31.154 13:28:10 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:04:31.154 13:28:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:31.154 13:28:10 -- common/autotest_common.sh@10 -- # set +x 00:04:31.414 13:28:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:31.414 13:28:10 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:04:31.414 13:28:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:31.414 13:28:10 -- common/autotest_common.sh@10 -- # set +x 00:04:31.414 13:28:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:31.414 13:28:10 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:04:31.414 13:28:10 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:04:31.414 13:28:10 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:04:31.414 13:28:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:31.414 13:28:10 -- common/autotest_common.sh@10 -- # set +x 00:04:31.414 13:28:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:31.414 13:28:10 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:04:31.414 13:28:10 -- bdev/blockdev.sh@747 -- # jq -r .name 00:04:31.415 13:28:10 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "402798b2-3ec0-11ef-b9c4-5b09e08d4792"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "402798b2-3ec0-11ef-b9c4-5b09e08d4792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "128eec9c-aec9-f957-80b9-575dd096b742"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "128eec9c-aec9-f957-80b9-575dd096b742",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "c624a236-9260-125d-b927-928d900500a5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "c624a236-9260-125d-b927-928d900500a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "51b6bdda-1544-1053-92f8-05ee09d8d432"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "51b6bdda-1544-1053-92f8-05ee09d8d432",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "c8595006-19dd-c05c-a9dc-e8413781aa18"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c8595006-19dd-c05c-a9dc-e8413781aa18",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "f5eddae7-a250-4658-877f-773d0b45f792"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f5eddae7-a250-4658-877f-773d0b45f792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "83112fa8-cf32-665c-a2b0-f7190ced1088"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "83112fa8-cf32-665c-a2b0-f7190ced1088",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "ecccb021-0f38-4f5e-936c-5bf39391ebf1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ecccb021-0f38-4f5e-936c-5bf39391ebf1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "2460acd2-fe00-9154-a4c0-2302f30cfe6b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2460acd2-fe00-9154-a4c0-2302f30cfe6b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "9654b799-4e1b-8055-a2b3-a8971ed787aa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9654b799-4e1b-8055-a2b3-a8971ed787aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "0079bf30-c205-015e-9535-bd6a9004932c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0079bf30-c205-015e-9535-bd6a9004932c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "17bb3d85-def1-9952-b486-361e37b6a19b"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "17bb3d85-def1-9952-b486-361e37b6a19b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "40364c68-3ec0-11ef-b9c4-5b09e08d4792"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "40364c68-3ec0-11ef-b9c4-5b09e08d4792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "40364c68-3ec0-11ef-b9c4-5b09e08d4792",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "402d1649-3ec0-11ef-b9c4-5b09e08d4792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "402e4ec6-3ec0-11ef-b9c4-5b09e08d4792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "4037781c-3ec0-11ef-b9c4-5b09e08d4792"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4037781c-3ec0-11ef-b9c4-5b09e08d4792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4037781c-3ec0-11ef-b9c4-5b09e08d4792",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "402f8746-3ec0-11ef-b9c4-5b09e08d4792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "4030bfc4-3ec0-11ef-b9c4-5b09e08d4792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "4038affa-3ec0-11ef-b9c4-5b09e08d4792"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4038affa-3ec0-11ef-b9c4-5b09e08d4792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4038affa-3ec0-11ef-b9c4-5b09e08d4792",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "4031f82a-3ec0-11ef-b9c4-5b09e08d4792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "403330af-3ec0-11ef-b9c4-5b09e08d4792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "404275ad-3ec0-11ef-b9c4-5b09e08d4792"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "404275ad-3ec0-11ef-b9c4-5b09e08d4792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:04:31.415 13:28:10 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:04:31.415 13:28:10 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:04:31.415 13:28:10 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:04:31.415 13:28:10 -- bdev/blockdev.sh@752 -- # killprocess 46988 00:04:31.415 13:28:10 -- common/autotest_common.sh@926 -- # '[' -z 46988 ']' 00:04:31.415 13:28:10 -- common/autotest_common.sh@930 -- # kill -0 46988 00:04:31.415 13:28:10 -- common/autotest_common.sh@931 -- # uname 00:04:31.415 13:28:10 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:04:31.415 13:28:10 -- common/autotest_common.sh@934 -- # ps -c -o command 46988 00:04:31.415 13:28:10 -- common/autotest_common.sh@934 -- # tail -1 00:04:31.415 13:28:10 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:04:31.415 13:28:10 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:04:31.415 killing process with pid 46988 00:04:31.415 13:28:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46988' 00:04:31.415 13:28:10 -- common/autotest_common.sh@945 -- # kill 46988 00:04:31.415 13:28:10 -- common/autotest_common.sh@950 -- # wait 46988 00:04:31.675 13:28:10 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:04:31.675 13:28:10 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:04:31.675 
13:28:10 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:04:31.675 13:28:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.675 13:28:10 -- common/autotest_common.sh@10 -- # set +x 00:04:31.675 ************************************ 00:04:31.675 START TEST bdev_hello_world 00:04:31.675 ************************************ 00:04:31.675 13:28:10 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:04:31.675 [2024-07-10 13:28:10.971425] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:31.675 [2024-07-10 13:28:10.971793] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:32.242 EAL: TSC is not safe to use in SMP mode 00:04:32.242 EAL: TSC is not invariant 00:04:32.242 [2024-07-10 13:28:11.409689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.242 [2024-07-10 13:28:11.499309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.242 [2024-07-10 13:28:11.555150] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:04:32.242 [2024-07-10 13:28:11.555205] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:04:32.242 [2024-07-10 13:28:11.563140] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:04:32.242 [2024-07-10 13:28:11.563163] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:04:32.242 [2024-07-10 13:28:11.571153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:32.242 [2024-07-10 13:28:11.571176] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:04:32.242 [2024-07-10 13:28:11.571185] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:04:32.500 [2024-07-10 13:28:11.619162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:32.500 [2024-07-10 13:28:11.619217] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:32.500 [2024-07-10 13:28:11.619233] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcd1800 00:04:32.500 [2024-07-10 13:28:11.619242] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:32.500 [2024-07-10 13:28:11.619556] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:32.500 [2024-07-10 13:28:11.619586] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:04:32.500 [2024-07-10 13:28:11.720360] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:04:32.500 [2024-07-10 13:28:11.720418] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:04:32.500 [2024-07-10 13:28:11.720432] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:04:32.500 [2024-07-10 13:28:11.720447] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:04:32.500 [2024-07-10 13:28:11.720463] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:04:32.500 [2024-07-10 13:28:11.720482] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:04:32.500 [2024-07-10 13:28:11.720497] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello 
World! 00:04:32.500 00:04:32.500 [2024-07-10 13:28:11.720510] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:04:32.759 00:04:32.759 real 0m0.955s 00:04:32.759 user 0m0.469s 00:04:32.759 sys 0m0.483s 00:04:32.759 ************************************ 00:04:32.759 END TEST bdev_hello_world 00:04:32.759 ************************************ 00:04:32.759 13:28:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.759 13:28:11 -- common/autotest_common.sh@10 -- # set +x 00:04:32.759 13:28:11 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:04:32.759 13:28:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:04:32.759 13:28:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.759 13:28:11 -- common/autotest_common.sh@10 -- # set +x 00:04:32.759 ************************************ 00:04:32.759 START TEST bdev_bounds 00:04:32.759 ************************************ 00:04:32.759 13:28:11 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:04:32.759 13:28:11 -- bdev/blockdev.sh@288 -- # bdevio_pid=47028 00:04:32.759 13:28:11 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.759 Process bdevio pid: 47028 00:04:32.759 13:28:11 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 47028' 00:04:32.759 13:28:11 -- bdev/blockdev.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:04:32.759 13:28:11 -- bdev/blockdev.sh@291 -- # waitforlisten 47028 00:04:32.759 13:28:11 -- common/autotest_common.sh@819 -- # '[' -z 47028 ']' 00:04:32.759 13:28:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.759 13:28:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:32.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.759 13:28:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.759 13:28:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:32.759 13:28:11 -- common/autotest_common.sh@10 -- # set +x 00:04:32.759 [2024-07-10 13:28:11.977795] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:32.759 [2024-07-10 13:28:11.978151] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:33.326 EAL: TSC is not safe to use in SMP mode 00:04:33.326 EAL: TSC is not invariant 00:04:33.326 [2024-07-10 13:28:12.429540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:33.326 [2024-07-10 13:28:12.520985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.326 [2024-07-10 13:28:12.521173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.326 [2024-07-10 13:28:12.521172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:33.326 [2024-07-10 13:28:12.579665] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:04:33.326 [2024-07-10 13:28:12.579742] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:04:33.326 [2024-07-10 13:28:12.587639] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:04:33.326 [2024-07-10 13:28:12.587677] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:04:33.326 [2024-07-10 13:28:12.595659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:33.326 [2024-07-10 13:28:12.595695] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:04:33.327 [2024-07-10 13:28:12.595705] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:04:33.327 [2024-07-10 13:28:12.643665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:33.327 [2024-07-10 13:28:12.643734] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:33.327 [2024-07-10 13:28:12.643750] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b7fe800 00:04:33.327 [2024-07-10 13:28:12.643760] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:33.327 [2024-07-10 13:28:12.644202] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:33.327 [2024-07-10 13:28:12.644241] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:04:33.585 13:28:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:33.585 13:28:12 -- common/autotest_common.sh@852 -- # return 0 00:04:33.585 13:28:12 -- bdev/blockdev.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:04:33.843 I/O targets: 00:04:33.843 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:04:33.843 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:04:33.843 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:04:33.843 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:04:33.843 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:04:33.843 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:04:33.843 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:04:33.843 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:04:33.843 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:04:33.843 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:04:33.843 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:04:33.843 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:04:33.843 raid0: 131072 blocks of 512 bytes (64 MiB) 00:04:33.843 concat0: 131072 blocks of 512 bytes (64 MiB) 00:04:33.843 raid1: 65536 blocks of 512 bytes (32 MiB) 00:04:33.843 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
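The I/O targets listed above are the bdev stack this bdevio run drives: plain Malloc disks, Split Disk children carved out of Malloc1 and Malloc2, a passthru bdev (TestPT) layered on Malloc3, raid0/concat/raid1 volumes over further Malloc bdevs, and a file-backed AIO bdev. As a rough sketch only, an equivalent stack could be assembled by hand with SPDK's rpc.py along the lines below; these are not the exact calls blockdev.sh issued, and the sizes and paths are simply read off the JSON dump earlier in this log.

# Illustrative rpc.py sequence -- an approximation, not the commands the test script ran.
scripts/rpc.py bdev_malloc_create -b Malloc2 32 512          # 65536 blocks x 512 B = 32 MiB
scripts/rpc.py bdev_split_create Malloc2 8                   # Malloc2p0..Malloc2p7, 8192 blocks each
scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT     # passthru bdev over Malloc3, as in the dump
scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r raid0 -b "Malloc4 Malloc5"
scripts/rpc.py bdev_aio_create /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048
scripts/rpc.py bdev_get_bdevs                                # prints JSON of the same shape seen above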
00:04:33.843 00:04:33.843 00:04:33.843 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.843 http://cunit.sourceforge.net/ 00:04:33.843 00:04:33.843 00:04:33.843 Suite: bdevio tests on: AIO0 00:04:33.843 Test: blockdev write read block ...passed 00:04:33.843 Test: blockdev write zeroes read block ...passed 00:04:33.843 Test: blockdev write zeroes read no split ...passed 00:04:33.843 Test: blockdev write zeroes read split ...passed 00:04:33.843 Test: blockdev write zeroes read split partial ...passed 00:04:33.843 Test: blockdev reset ...passed 00:04:33.843 Test: blockdev write read 8 blocks ...passed 00:04:33.843 Test: blockdev write read size > 128k ...passed 00:04:33.843 Test: blockdev write read invalid size ...passed 00:04:33.843 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:33.843 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:33.843 Test: blockdev write read max offset ...passed 00:04:33.843 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:33.843 Test: blockdev writev readv 8 blocks ...passed 00:04:33.843 Test: blockdev writev readv 30 x 1block ...passed 00:04:33.843 Test: blockdev writev readv block ...passed 00:04:33.843 Test: blockdev writev readv size > 128k ...passed 00:04:33.843 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:33.843 Test: blockdev comparev and writev ...passed 00:04:33.843 Test: blockdev nvme passthru rw ...passed 00:04:33.843 Test: blockdev nvme passthru vendor specific ...passed 00:04:33.843 Test: blockdev nvme admin passthru ...passed 00:04:33.843 Test: blockdev copy ...passed 00:04:33.843 Suite: bdevio tests on: raid1 00:04:33.843 Test: blockdev write read block ...passed 00:04:33.843 Test: blockdev write zeroes read block ...passed 00:04:33.843 Test: blockdev write zeroes read no split ...passed 00:04:33.843 Test: blockdev write zeroes read split ...passed 00:04:33.843 Test: blockdev write zeroes read split partial ...passed 00:04:33.843 Test: blockdev reset ...passed 00:04:33.843 Test: blockdev write read 8 blocks ...passed 00:04:33.843 Test: blockdev write read size > 128k ...passed 00:04:33.843 Test: blockdev write read invalid size ...passed 00:04:33.843 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:33.843 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:33.843 Test: blockdev write read max offset ...passed 00:04:33.843 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:33.843 Test: blockdev writev readv 8 blocks ...passed 00:04:33.843 Test: blockdev writev readv 30 x 1block ...passed 00:04:33.843 Test: blockdev writev readv block ...passed 00:04:33.843 Test: blockdev writev readv size > 128k ...passed 00:04:33.843 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:33.843 Test: blockdev comparev and writev ...passed 00:04:33.843 Test: blockdev nvme passthru rw ...passed 00:04:33.843 Test: blockdev nvme passthru vendor specific ...passed 00:04:33.843 Test: blockdev nvme admin passthru ...passed 00:04:33.843 Test: blockdev copy ...passed 00:04:33.843 Suite: bdevio tests on: concat0 00:04:33.843 Test: blockdev write read block ...passed 00:04:33.843 Test: blockdev write zeroes read block ...passed 00:04:33.843 Test: blockdev write zeroes read no split ...passed 00:04:33.844 Test: blockdev write zeroes read split ...passed 00:04:33.844 Test: blockdev write zeroes read split partial ...passed 00:04:33.844 Test: blockdev reset 
...passed 00:04:33.844 Test: blockdev write read 8 blocks ...passed 00:04:33.844 Test: blockdev write read size > 128k ...passed 00:04:33.844 Test: blockdev write read invalid size ...passed 00:04:33.844 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:33.844 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:33.844 Test: blockdev write read max offset ...passed 00:04:33.844 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:33.844 Test: blockdev writev readv 8 blocks ...passed 00:04:33.844 Test: blockdev writev readv 30 x 1block ...passed 00:04:33.844 Test: blockdev writev readv block ...passed 00:04:33.844 Test: blockdev writev readv size > 128k ...passed 00:04:33.844 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:33.844 Test: blockdev comparev and writev ...passed 00:04:33.844 Test: blockdev nvme passthru rw ...passed 00:04:33.844 Test: blockdev nvme passthru vendor specific ...passed 00:04:33.844 Test: blockdev nvme admin passthru ...passed 00:04:33.844 Test: blockdev copy ...passed 00:04:33.844 Suite: bdevio tests on: raid0 00:04:33.844 Test: blockdev write read block ...passed 00:04:33.844 Test: blockdev write zeroes read block ...passed 00:04:33.844 Test: blockdev write zeroes read no split ...passed 00:04:33.844 Test: blockdev write zeroes read split ...passed 00:04:33.844 Test: blockdev write zeroes read split partial ...passed 00:04:33.844 Test: blockdev reset ...passed 00:04:33.844 Test: blockdev write read 8 blocks ...passed 00:04:33.844 Test: blockdev write read size > 128k ...passed 00:04:33.844 Test: blockdev write read invalid size ...passed 00:04:33.844 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:33.844 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:33.844 Test: blockdev write read max offset ...passed 00:04:33.844 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:33.844 Test: blockdev writev readv 8 blocks ...passed 00:04:33.844 Test: blockdev writev readv 30 x 1block ...passed 00:04:33.844 Test: blockdev writev readv block ...passed 00:04:33.844 Test: blockdev writev readv size > 128k ...passed 00:04:33.844 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:33.844 Test: blockdev comparev and writev ...passed 00:04:33.844 Test: blockdev nvme passthru rw ...passed 00:04:33.844 Test: blockdev nvme passthru vendor specific ...passed 00:04:33.844 Test: blockdev nvme admin passthru ...passed 00:04:33.844 Test: blockdev copy ...passed 00:04:33.844 Suite: bdevio tests on: TestPT 00:04:33.844 Test: blockdev write read block ...passed 00:04:33.844 Test: blockdev write zeroes read block ...passed 00:04:33.844 Test: blockdev write zeroes read no split ...passed 00:04:33.844 Test: blockdev write zeroes read split ...passed 00:04:33.844 Test: blockdev write zeroes read split partial ...passed 00:04:33.844 Test: blockdev reset ...passed 00:04:34.102 Test: blockdev write read 8 blocks ...passed 00:04:34.102 Test: blockdev write read size > 128k ...passed 00:04:34.102 Test: blockdev write read invalid size ...passed 00:04:34.102 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:34.102 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:34.102 Test: blockdev write read max offset ...passed 00:04:34.102 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:34.102 Test: blockdev writev readv 8 blocks 
...passed 00:04:34.102 Test: blockdev writev readv 30 x 1block ...passed 00:04:34.102 Test: blockdev writev readv block ...passed 00:04:34.102 Test: blockdev writev readv size > 128k ...passed 00:04:34.102 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:34.102 Test: blockdev comparev and writev ...passed 00:04:34.102 Test: blockdev nvme passthru rw ...passed 00:04:34.102 Test: blockdev nvme passthru vendor specific ...passed 00:04:34.102 Test: blockdev nvme admin passthru ...passed 00:04:34.102 Test: blockdev copy ...passed 00:04:34.102 Suite: bdevio tests on: Malloc2p7 00:04:34.102 Test: blockdev write read block ...passed 00:04:34.102 Test: blockdev write zeroes read block ...passed 00:04:34.102 Test: blockdev write zeroes read no split ...passed 00:04:34.102 Test: blockdev write zeroes read split ...passed 00:04:34.102 Test: blockdev write zeroes read split partial ...passed 00:04:34.102 Test: blockdev reset ...passed 00:04:34.102 Test: blockdev write read 8 blocks ...passed 00:04:34.102 Test: blockdev write read size > 128k ...passed 00:04:34.102 Test: blockdev write read invalid size ...passed 00:04:34.102 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:34.102 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:34.102 Test: blockdev write read max offset ...passed 00:04:34.102 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:34.102 Test: blockdev writev readv 8 blocks ...passed 00:04:34.102 Test: blockdev writev readv 30 x 1block ...passed 00:04:34.102 Test: blockdev writev readv block ...passed 00:04:34.102 Test: blockdev writev readv size > 128k ...passed 00:04:34.102 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:34.102 Test: blockdev comparev and writev ...passed 00:04:34.102 Test: blockdev nvme passthru rw ...passed 00:04:34.103 Test: blockdev nvme passthru vendor specific ...passed 00:04:34.103 Test: blockdev nvme admin passthru ...passed 00:04:34.103 Test: blockdev copy ...passed 00:04:34.103 Suite: bdevio tests on: Malloc2p6 00:04:34.103 Test: blockdev write read block ...passed 00:04:34.103 Test: blockdev write zeroes read block ...passed 00:04:34.103 Test: blockdev write zeroes read no split ...passed 00:04:34.103 Test: blockdev write zeroes read split ...passed 00:04:34.103 Test: blockdev write zeroes read split partial ...passed 00:04:34.103 Test: blockdev reset ...passed 00:04:34.103 Test: blockdev write read 8 blocks ...passed 00:04:34.103 Test: blockdev write read size > 128k ...passed 00:04:34.103 Test: blockdev write read invalid size ...passed 00:04:34.103 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:34.103 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:34.103 Test: blockdev write read max offset ...passed 00:04:34.103 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:34.103 Test: blockdev writev readv 8 blocks ...passed 00:04:34.103 Test: blockdev writev readv 30 x 1block ...passed 00:04:34.103 Test: blockdev writev readv block ...passed 00:04:34.103 Test: blockdev writev readv size > 128k ...passed 00:04:34.103 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:34.103 Test: blockdev comparev and writev ...passed 00:04:34.103 Test: blockdev nvme passthru rw ...passed 00:04:34.103 Test: blockdev nvme passthru vendor specific ...passed 00:04:34.103 Test: blockdev nvme admin passthru ...passed 00:04:34.103 Test: blockdev copy ...passed 
00:04:34.103 Suite: bdevio tests on: Malloc2p5 00:04:34.103 Test: blockdev write read block ...passed 00:04:34.103 Test: blockdev write zeroes read block ...passed 00:04:34.103 Test: blockdev write zeroes read no split ...passed 00:04:34.103 Test: blockdev write zeroes read split ...passed 00:04:34.103 Test: blockdev write zeroes read split partial ...passed 00:04:34.103 Test: blockdev reset ...passed 00:04:34.103 Test: blockdev write read 8 blocks ...passed 00:04:34.103 Test: blockdev write read size > 128k ...passed 00:04:34.103 Test: blockdev write read invalid size ...passed 00:04:34.103 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:34.103 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:34.103 Test: blockdev write read max offset ...passed 00:04:34.103 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:34.103 Test: blockdev writev readv 8 blocks ...passed 00:04:34.103 Test: blockdev writev readv 30 x 1block ...passed 00:04:34.103 Test: blockdev writev readv block ...passed 00:04:34.103 Test: blockdev writev readv size > 128k ...passed 00:04:34.103 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:34.103 Test: blockdev comparev and writev ...passed 00:04:34.103 Test: blockdev nvme passthru rw ...passed 00:04:34.103 Test: blockdev nvme passthru vendor specific ...passed 00:04:34.103 Test: blockdev nvme admin passthru ...passed 00:04:34.103 Test: blockdev copy ...passed 00:04:34.103 Suite: bdevio tests on: Malloc2p4 00:04:34.103 Test: blockdev write read block ...passed 00:04:34.103 Test: blockdev write zeroes read block ...passed 00:04:34.103 Test: blockdev write zeroes read no split ...passed 00:04:34.103 Test: blockdev write zeroes read split ...passed 00:04:34.103 Test: blockdev write zeroes read split partial ...passed 00:04:34.103 Test: blockdev reset ...passed 00:04:34.103 Test: blockdev write read 8 blocks ...passed 00:04:34.103 Test: blockdev write read size > 128k ...passed 00:04:34.103 Test: blockdev write read invalid size ...passed 00:04:34.103 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:34.103 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:34.103 Test: blockdev write read max offset ...passed 00:04:34.103 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:34.103 Test: blockdev writev readv 8 blocks ...passed 00:04:34.103 Test: blockdev writev readv 30 x 1block ...passed 00:04:34.103 Test: blockdev writev readv block ...passed 00:04:34.103 Test: blockdev writev readv size > 128k ...passed 00:04:34.103 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:34.103 Test: blockdev comparev and writev ...passed 00:04:34.103 Test: blockdev nvme passthru rw ...passed 00:04:34.103 Test: blockdev nvme passthru vendor specific ...passed 00:04:34.103 Test: blockdev nvme admin passthru ...passed 00:04:34.103 Test: blockdev copy ...passed 00:04:34.103 Suite: bdevio tests on: Malloc2p3 00:04:34.103 Test: blockdev write read block ...passed 00:04:34.103 Test: blockdev write zeroes read block ...passed 00:04:34.103 Test: blockdev write zeroes read no split ...passed 00:04:34.103 Test: blockdev write zeroes read split ...passed 00:04:34.103 Test: blockdev write zeroes read split partial ...passed 00:04:34.103 Test: blockdev reset ...passed 00:04:34.103 Test: blockdev write read 8 blocks ...passed 00:04:34.103 Test: blockdev write read size > 128k ...passed 00:04:34.103 Test: 
blockdev write read invalid size ...passed 00:04:34.103 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:34.103 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:34.103 Test: blockdev write read max offset ...passed 00:04:34.103 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:34.103 Test: blockdev writev readv 8 blocks ...passed 00:04:34.103 Test: blockdev writev readv 30 x 1block ...passed 00:04:34.103 Test: blockdev writev readv block ...passed 00:04:34.103 Test: blockdev writev readv size > 128k ...passed 00:04:34.103 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:34.103 Test: blockdev comparev and writev ...passed 00:04:34.103 Test: blockdev nvme passthru rw ...passed 00:04:34.103 Test: blockdev nvme passthru vendor specific ...passed 00:04:34.103 Test: blockdev nvme admin passthru ...passed 00:04:34.103 Test: blockdev copy ...passed 00:04:34.103 Suite: bdevio tests on: Malloc2p2 00:04:34.103 Test: blockdev write read block ...passed 00:04:34.103 Test: blockdev write zeroes read block ...passed 00:04:34.103 Test: blockdev write zeroes read no split ...passed 00:04:34.103 Test: blockdev write zeroes read split ...passed 00:04:34.103 Test: blockdev write zeroes read split partial ...passed 00:04:34.103 Test: blockdev reset ...passed 00:04:34.103 Test: blockdev write read 8 blocks ...passed 00:04:34.103 Test: blockdev write read size > 128k ...passed 00:04:34.103 Test: blockdev write read invalid size ...passed 00:04:34.103 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:34.103 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:34.103 Test: blockdev write read max offset ...passed 00:04:34.103 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:34.103 Test: blockdev writev readv 8 blocks ...passed 00:04:34.103 Test: blockdev writev readv 30 x 1block ...passed 00:04:34.103 Test: blockdev writev readv block ...passed 00:04:34.103 Test: blockdev writev readv size > 128k ...passed 00:04:34.103 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:34.103 Test: blockdev comparev and writev ...passed 00:04:34.103 Test: blockdev nvme passthru rw ...passed 00:04:34.103 Test: blockdev nvme passthru vendor specific ...passed 00:04:34.103 Test: blockdev nvme admin passthru ...passed 00:04:34.103 Test: blockdev copy ...passed 00:04:34.103 Suite: bdevio tests on: Malloc2p1 00:04:34.103 Test: blockdev write read block ...passed 00:04:34.103 Test: blockdev write zeroes read block ...passed 00:04:34.103 Test: blockdev write zeroes read no split ...passed 00:04:34.103 Test: blockdev write zeroes read split ...passed 00:04:34.103 Test: blockdev write zeroes read split partial ...passed 00:04:34.103 Test: blockdev reset ...passed 00:04:34.103 Test: blockdev write read 8 blocks ...passed 00:04:34.103 Test: blockdev write read size > 128k ...passed 00:04:34.103 Test: blockdev write read invalid size ...passed 00:04:34.103 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:34.103 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:34.103 Test: blockdev write read max offset ...passed 00:04:34.103 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:34.103 Test: blockdev writev readv 8 blocks ...passed 00:04:34.103 Test: blockdev writev readv 30 x 1block ...passed 00:04:34.103 Test: blockdev writev readv block ...passed 
00:04:34.103 Test: blockdev writev readv size > 128k ...passed 00:04:34.103 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:34.103 Test: blockdev comparev and writev ...passed 00:04:34.103 Test: blockdev nvme passthru rw ...passed 00:04:34.103 Test: blockdev nvme passthru vendor specific ...passed 00:04:34.103 Test: blockdev nvme admin passthru ...passed 00:04:34.103 Test: blockdev copy ...passed 00:04:34.103 Suite: bdevio tests on: Malloc2p0 00:04:34.103 Test: blockdev write read block ...passed 00:04:34.103 Test: blockdev write zeroes read block ...passed 00:04:34.103 Test: blockdev write zeroes read no split ...passed 00:04:34.103 Test: blockdev write zeroes read split ...passed 00:04:34.103 Test: blockdev write zeroes read split partial ...passed 00:04:34.103 Test: blockdev reset ...passed 00:04:34.103 Test: blockdev write read 8 blocks ...passed 00:04:34.103 Test: blockdev write read size > 128k ...passed 00:04:34.103 Test: blockdev write read invalid size ...passed 00:04:34.103 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:34.103 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:34.103 Test: blockdev write read max offset ...passed 00:04:34.103 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:34.103 Test: blockdev writev readv 8 blocks ...passed 00:04:34.103 Test: blockdev writev readv 30 x 1block ...passed 00:04:34.103 Test: blockdev writev readv block ...passed 00:04:34.103 Test: blockdev writev readv size > 128k ...passed 00:04:34.103 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:34.103 Test: blockdev comparev and writev ...passed 00:04:34.103 Test: blockdev nvme passthru rw ...passed 00:04:34.103 Test: blockdev nvme passthru vendor specific ...passed 00:04:34.103 Test: blockdev nvme admin passthru ...passed 00:04:34.103 Test: blockdev copy ...passed 00:04:34.103 Suite: bdevio tests on: Malloc1p1 00:04:34.103 Test: blockdev write read block ...passed 00:04:34.103 Test: blockdev write zeroes read block ...passed 00:04:34.103 Test: blockdev write zeroes read no split ...passed 00:04:34.103 Test: blockdev write zeroes read split ...passed 00:04:34.103 Test: blockdev write zeroes read split partial ...passed 00:04:34.103 Test: blockdev reset ...passed 00:04:34.103 Test: blockdev write read 8 blocks ...passed 00:04:34.103 Test: blockdev write read size > 128k ...passed 00:04:34.103 Test: blockdev write read invalid size ...passed 00:04:34.103 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:34.103 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:34.103 Test: blockdev write read max offset ...passed 00:04:34.103 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:34.103 Test: blockdev writev readv 8 blocks ...passed 00:04:34.103 Test: blockdev writev readv 30 x 1block ...passed 00:04:34.103 Test: blockdev writev readv block ...passed 00:04:34.103 Test: blockdev writev readv size > 128k ...passed 00:04:34.103 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:34.104 Test: blockdev comparev and writev ...passed 00:04:34.104 Test: blockdev nvme passthru rw ...passed 00:04:34.104 Test: blockdev nvme passthru vendor specific ...passed 00:04:34.104 Test: blockdev nvme admin passthru ...passed 00:04:34.104 Test: blockdev copy ...passed 00:04:34.104 Suite: bdevio tests on: Malloc1p0 00:04:34.104 Test: blockdev write read block ...passed 00:04:34.104 Test: blockdev 
write zeroes read block ...passed 00:04:34.104 Test: blockdev write zeroes read no split ...passed 00:04:34.104 Test: blockdev write zeroes read split ...passed 00:04:34.104 Test: blockdev write zeroes read split partial ...passed 00:04:34.104 Test: blockdev reset ...passed 00:04:34.104 Test: blockdev write read 8 blocks ...passed 00:04:34.104 Test: blockdev write read size > 128k ...passed 00:04:34.104 Test: blockdev write read invalid size ...passed 00:04:34.104 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:34.104 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:34.104 Test: blockdev write read max offset ...passed 00:04:34.104 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:34.104 Test: blockdev writev readv 8 blocks ...passed 00:04:34.104 Test: blockdev writev readv 30 x 1block ...passed 00:04:34.104 Test: blockdev writev readv block ...passed 00:04:34.104 Test: blockdev writev readv size > 128k ...passed 00:04:34.104 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:34.104 Test: blockdev comparev and writev ...passed 00:04:34.104 Test: blockdev nvme passthru rw ...passed 00:04:34.104 Test: blockdev nvme passthru vendor specific ...passed 00:04:34.104 Test: blockdev nvme admin passthru ...passed 00:04:34.104 Test: blockdev copy ...passed 00:04:34.104 Suite: bdevio tests on: Malloc0 00:04:34.104 Test: blockdev write read block ...passed 00:04:34.104 Test: blockdev write zeroes read block ...passed 00:04:34.104 Test: blockdev write zeroes read no split ...passed 00:04:34.104 Test: blockdev write zeroes read split ...passed 00:04:34.104 Test: blockdev write zeroes read split partial ...passed 00:04:34.104 Test: blockdev reset ...passed 00:04:34.104 Test: blockdev write read 8 blocks ...passed 00:04:34.104 Test: blockdev write read size > 128k ...passed 00:04:34.104 Test: blockdev write read invalid size ...passed 00:04:34.104 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:34.104 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:34.104 Test: blockdev write read max offset ...passed 00:04:34.104 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:34.104 Test: blockdev writev readv 8 blocks ...passed 00:04:34.104 Test: blockdev writev readv 30 x 1block ...passed 00:04:34.104 Test: blockdev writev readv block ...passed 00:04:34.104 Test: blockdev writev readv size > 128k ...passed 00:04:34.104 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:34.104 Test: blockdev comparev and writev ...passed 00:04:34.104 Test: blockdev nvme passthru rw ...passed 00:04:34.104 Test: blockdev nvme passthru vendor specific ...passed 00:04:34.104 Test: blockdev nvme admin passthru ...passed 00:04:34.104 Test: blockdev copy ...passed 00:04:34.104 00:04:34.104 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.104 suites 16 16 n/a 0 0 00:04:34.104 tests 368 368 368 0 0 00:04:34.104 asserts 2224 2224 2224 0 n/a 00:04:34.104 00:04:34.104 Elapsed time = 0.570 seconds 00:04:34.104 0 00:04:34.104 13:28:13 -- bdev/blockdev.sh@293 -- # killprocess 47028 00:04:34.104 13:28:13 -- common/autotest_common.sh@926 -- # '[' -z 47028 ']' 00:04:34.104 13:28:13 -- common/autotest_common.sh@930 -- # kill -0 47028 00:04:34.104 13:28:13 -- common/autotest_common.sh@931 -- # uname 00:04:34.104 13:28:13 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:04:34.104 13:28:13 -- common/autotest_common.sh@934 
-- # ps -c -o command 47028 00:04:34.104 13:28:13 -- common/autotest_common.sh@934 -- # tail -1 00:04:34.104 13:28:13 -- common/autotest_common.sh@934 -- # process_name=bdevio 00:04:34.104 13:28:13 -- common/autotest_common.sh@936 -- # '[' bdevio = sudo ']' 00:04:34.104 killing process with pid 47028 00:04:34.104 13:28:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47028' 00:04:34.104 13:28:13 -- common/autotest_common.sh@945 -- # kill 47028 00:04:34.104 13:28:13 -- common/autotest_common.sh@950 -- # wait 47028 00:04:34.362 13:28:13 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:04:34.362 00:04:34.362 real 0m1.576s 00:04:34.362 user 0m3.157s 00:04:34.362 sys 0m0.646s 00:04:34.362 13:28:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.362 13:28:13 -- common/autotest_common.sh@10 -- # set +x 00:04:34.362 ************************************ 00:04:34.362 END TEST bdev_bounds 00:04:34.363 ************************************ 00:04:34.363 13:28:13 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:04:34.363 13:28:13 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:34.363 13:28:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.363 13:28:13 -- common/autotest_common.sh@10 -- # set +x 00:04:34.363 ************************************ 00:04:34.363 START TEST bdev_nbd 00:04:34.363 ************************************ 00:04:34.363 13:28:13 -- common/autotest_common.sh@1104 -- # nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:04:34.363 13:28:13 -- bdev/blockdev.sh@298 -- # uname -s 00:04:34.363 13:28:13 -- bdev/blockdev.sh@298 -- # [[ FreeBSD == Linux ]] 00:04:34.363 13:28:13 -- bdev/blockdev.sh@298 -- # return 0 00:04:34.363 00:04:34.363 real 0m0.007s 00:04:34.363 user 0m0.000s 00:04:34.363 sys 0m0.011s 00:04:34.363 13:28:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.363 13:28:13 -- common/autotest_common.sh@10 -- # set +x 00:04:34.363 ************************************ 00:04:34.363 END TEST bdev_nbd 00:04:34.363 ************************************ 00:04:34.363 13:28:13 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:04:34.363 13:28:13 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:04:34.363 13:28:13 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:04:34.363 13:28:13 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:04:34.363 13:28:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:04:34.363 13:28:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.363 13:28:13 -- common/autotest_common.sh@10 -- # set +x 00:04:34.363 ************************************ 00:04:34.363 START TEST bdev_fio 00:04:34.363 ************************************ 00:04:34.363 13:28:13 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:04:34.363 13:28:13 -- bdev/blockdev.sh@329 -- # local env_context 00:04:34.363 13:28:13 -- bdev/blockdev.sh@333 -- # pushd /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:04:34.363 /usr/home/vagrant/spdk_repo/spdk/test/bdev /usr/home/vagrant/spdk_repo/spdk 00:04:34.363 13:28:13 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 
00:04:34.363 13:28:13 -- bdev/blockdev.sh@337 -- # echo '' 00:04:34.363 13:28:13 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:04:34.363 13:28:13 -- bdev/blockdev.sh@337 -- # env_context= 00:04:34.363 13:28:13 -- bdev/blockdev.sh@338 -- # fio_config_gen /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:04:34.363 13:28:13 -- common/autotest_common.sh@1259 -- # local config_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:04:34.363 13:28:13 -- common/autotest_common.sh@1260 -- # local workload=verify 00:04:34.363 13:28:13 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:04:34.363 13:28:13 -- common/autotest_common.sh@1262 -- # local env_context= 00:04:34.363 13:28:13 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:04:34.363 13:28:13 -- common/autotest_common.sh@1265 -- # '[' -e /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:04:34.363 13:28:13 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:04:34.363 13:28:13 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:04:34.363 13:28:13 -- common/autotest_common.sh@1278 -- # touch /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:04:34.363 13:28:13 -- common/autotest_common.sh@1280 -- # cat 00:04:34.363 13:28:13 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:04:34.363 13:28:13 -- common/autotest_common.sh@1293 -- # cat 00:04:34.363 13:28:13 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:04:34.363 13:28:13 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:04:34.932 13:28:14 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:04:34.932 13:28:14 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:04:34.932 13:28:14 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:34.932 13:28:14 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:04:34.932 13:28:14 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:04:34.932 13:28:14 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:34.932 13:28:14 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:04:34.932 13:28:14 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:04:34.932 13:28:14 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:34.932 13:28:14 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:04:34.932 13:28:14 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:04:34.932 13:28:14 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:34.932 13:28:14 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:04:34.932 13:28:14 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:04:34.933 13:28:14 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:34.933 13:28:14 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:04:34.933 13:28:14 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:04:34.933 13:28:14 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:34.933 13:28:14 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:04:34.933 13:28:14 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:04:34.933 13:28:14 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:34.933 13:28:14 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:04:34.933 13:28:14 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:04:34.933 13:28:14 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:34.933 13:28:14 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:04:34.933 13:28:14 -- bdev/blockdev.sh@341 -- # echo 
filename=Malloc2p4 00:04:34.933 13:28:14 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:34.933 13:28:14 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:04:34.933 13:28:14 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:04:34.933 13:28:14 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:34.933 13:28:14 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:04:34.933 13:28:14 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:04:34.933 13:28:14 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:34.933 13:28:14 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:04:34.933 13:28:14 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:04:34.933 13:28:14 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:34.933 13:28:14 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:04:34.933 13:28:14 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:04:34.933 13:28:14 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:34.933 13:28:14 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:04:34.933 13:28:14 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:04:34.933 13:28:14 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:34.933 13:28:14 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:04:34.933 13:28:14 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:04:34.933 13:28:14 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:34.933 13:28:14 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:04:34.933 13:28:14 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:04:34.933 13:28:14 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:34.933 13:28:14 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:04:34.933 13:28:14 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:04:34.933 13:28:14 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:04:34.933 13:28:14 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:04:34.933 13:28:14 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:04:34.933 13:28:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.933 13:28:14 -- common/autotest_common.sh@10 -- # set +x 00:04:34.933 ************************************ 00:04:34.933 START TEST bdev_fio_rw_verify 00:04:34.933 ************************************ 00:04:34.933 13:28:14 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:04:34.933 13:28:14 -- common/autotest_common.sh@1335 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:04:34.933 13:28:14 -- 
common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:04:34.933 13:28:14 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:04:34.933 13:28:14 -- common/autotest_common.sh@1318 -- # local sanitizers 00:04:34.933 13:28:14 -- common/autotest_common.sh@1319 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:04:34.933 13:28:14 -- common/autotest_common.sh@1320 -- # shift 00:04:34.933 13:28:14 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:04:34.933 13:28:14 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:04:34.933 13:28:14 -- common/autotest_common.sh@1324 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:04:34.933 13:28:14 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:04:34.933 13:28:14 -- common/autotest_common.sh@1324 -- # grep libasan 00:04:34.933 13:28:14 -- common/autotest_common.sh@1324 -- # asan_lib= 00:04:34.933 13:28:14 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:04:34.933 13:28:14 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:04:34.933 13:28:14 -- common/autotest_common.sh@1324 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:04:34.933 13:28:14 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:04:34.933 13:28:14 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:04:34.933 13:28:14 -- common/autotest_common.sh@1324 -- # asan_lib= 00:04:34.933 13:28:14 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:04:34.933 13:28:14 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:04:34.933 13:28:14 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:04:34.933 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:34.933 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:34.933 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:34.933 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:34.933 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:34.933 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:34.933 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:34.933 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:34.933 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:34.933 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:34.933 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:34.933 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:34.933 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:34.933 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:34.933 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:34.933 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:34.933 fio-3.35 00:04:34.933 Starting 16 threads 00:04:35.502 EAL: TSC is not safe to use in SMP mode 00:04:35.502 EAL: TSC is not invariant 00:04:47.722 00:04:47.722 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=102656: Wed Jul 10 13:28:25 2024 00:04:47.722 read: IOPS=261k, BW=1018MiB/s (1068MB/s)(9.95GiB/10005msec) 00:04:47.722 slat (nsec): min=225, max=940747k, avg=3621.62, stdev=714796.13 00:04:47.722 clat (nsec): min=663, max=942748k, avg=48743.44, stdev=2203618.86 00:04:47.722 lat (nsec): min=1703, max=942761k, avg=52365.05, stdev=2316693.24 00:04:47.722 clat percentiles (usec): 00:04:47.722 | 50.000th=[ 9], 99.000th=[ 750], 99.900th=[ 979], 00:04:47.722 | 99.990th=[ 94897], 99.999th=[256902] 00:04:47.722 write: IOPS=430k, BW=1680MiB/s (1762MB/s)(16.2GiB/9895msec); 0 zone resets 00:04:47.722 slat (nsec): min=633, max=615342k, avg=19250.33, stdev=890561.92 00:04:47.722 clat (nsec): min=713, max=615429k, avg=93675.33, stdev=1952121.28 00:04:47.722 lat (usec): min=10, max=615440, avg=112.93, stdev=2149.64 00:04:47.722 clat percentiles (usec): 00:04:47.722 | 50.000th=[ 44], 99.000th=[ 734], 99.900th=[ 2376], 00:04:47.722 | 99.990th=[ 94897], 99.999th=[175113] 00:04:47.722 bw ( MiB/s): min= 622, max= 2743, per=98.89%, avg=1661.90, stdev=42.05, samples=297 00:04:47.722 iops : min=159468, max=702232, avg=425443.09, stdev=10765.58, samples=297 00:04:47.722 lat (nsec) : 750=0.01%, 1000=0.01% 00:04:47.722 lat (usec) : 2=0.06%, 4=12.59%, 10=18.05%, 20=20.93%, 50=20.68% 00:04:47.722 lat (usec) : 100=25.16%, 250=0.91%, 500=0.10%, 750=0.58%, 1000=0.79% 00:04:47.722 lat (msec) : 2=0.04%, 4=0.03%, 10=0.01%, 20=0.01%, 50=0.02% 00:04:47.722 lat (msec) : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:04:47.722 cpu : usr=55.92%, sys=3.23%, ctx=836874, majf=0, minf=646 00:04:47.722 IO depths : 1=12.5%, 2=25.0%, 4=49.9%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:04:47.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:04:47.722 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:04:47.722 issued rwts: total=2607959,4256883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:04:47.722 latency : target=0, window=0, percentile=100.00%, depth=8 00:04:47.722 00:04:47.722 Run status group 0 (all jobs): 00:04:47.722 READ: bw=1018MiB/s (1068MB/s), 1018MiB/s-1018MiB/s (1068MB/s-1068MB/s), io=9.95GiB (10.7GB), run=10005-10005msec 00:04:47.722 WRITE: bw=1680MiB/s (1762MB/s), 1680MiB/s-1680MiB/s (1762MB/s-1762MB/s), io=16.2GiB (17.4GB), run=9895-9895msec 00:04:47.722 00:04:47.722 real 0m11.836s 00:04:47.722 user 1m33.196s 00:04:47.722 sys 0m7.185s 00:04:47.722 13:28:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.722 13:28:25 -- common/autotest_common.sh@10 -- # set +x 00:04:47.722 ************************************ 00:04:47.722 END TEST bdev_fio_rw_verify 00:04:47.722 ************************************ 00:04:47.722 13:28:25 -- bdev/blockdev.sh@348 -- # rm -f 00:04:47.722 
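The job file driven in this phase is the bdev.fio that fio_config_gen wrote a few entries earlier: the per-bdev stanzas come from the echoed '[job_<bdev>]' / 'filename=<bdev>' pairs, while block size, queue depth, runtime and the bdev JSON config are passed on the fio command line (--bs=4k --iodepth=8 --runtime=10 --spdk_json_conf=.../bdev.json). Its approximate shape is sketched below; whatever verify options fio_config_gen adds for the 'verify' workload are not visible in this log, so only the lines that can be read off the output are shown.

[global]
ioengine=spdk_bdev       ; also forced on the command line via --ioengine=spdk_bdev
serialize_overlap=1      ; written because fio reported version 3.35 above
rw=randwrite             ; inferred from the fio banner (job_*: rw=randwrite); exact placement is an assumption

[job_Malloc0]
filename=Malloc0

[job_Malloc1p0]
filename=Malloc1p0

; ...one stanza per bdev, down to [job_AIO0] / filename=AIO0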
13:28:25 -- bdev/blockdev.sh@349 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:04:47.722 13:28:25 -- bdev/blockdev.sh@352 -- # fio_config_gen /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:04:47.722 13:28:25 -- common/autotest_common.sh@1259 -- # local config_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:04:47.722 13:28:25 -- common/autotest_common.sh@1260 -- # local workload=trim 00:04:47.722 13:28:25 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:04:47.722 13:28:25 -- common/autotest_common.sh@1262 -- # local env_context= 00:04:47.722 13:28:25 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:04:47.722 13:28:25 -- common/autotest_common.sh@1265 -- # '[' -e /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:04:47.722 13:28:25 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:04:47.722 13:28:25 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:04:47.722 13:28:25 -- common/autotest_common.sh@1278 -- # touch /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:04:47.722 13:28:25 -- common/autotest_common.sh@1280 -- # cat 00:04:47.722 13:28:25 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:04:47.722 13:28:25 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:04:47.722 13:28:25 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:04:47.722 13:28:25 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:04:47.723 13:28:25 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "402798b2-3ec0-11ef-b9c4-5b09e08d4792"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "402798b2-3ec0-11ef-b9c4-5b09e08d4792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "128eec9c-aec9-f957-80b9-575dd096b742"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "128eec9c-aec9-f957-80b9-575dd096b742",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "c624a236-9260-125d-b927-928d900500a5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "c624a236-9260-125d-b927-928d900500a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "51b6bdda-1544-1053-92f8-05ee09d8d432"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "51b6bdda-1544-1053-92f8-05ee09d8d432",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "c8595006-19dd-c05c-a9dc-e8413781aa18"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c8595006-19dd-c05c-a9dc-e8413781aa18",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "f5eddae7-a250-4658-877f-773d0b45f792"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f5eddae7-a250-4658-877f-773d0b45f792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "83112fa8-cf32-665c-a2b0-f7190ced1088"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "83112fa8-cf32-665c-a2b0-f7190ced1088",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "ecccb021-0f38-4f5e-936c-5bf39391ebf1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 
8192,' ' "uuid": "ecccb021-0f38-4f5e-936c-5bf39391ebf1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "2460acd2-fe00-9154-a4c0-2302f30cfe6b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2460acd2-fe00-9154-a4c0-2302f30cfe6b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "9654b799-4e1b-8055-a2b3-a8971ed787aa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9654b799-4e1b-8055-a2b3-a8971ed787aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "0079bf30-c205-015e-9535-bd6a9004932c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0079bf30-c205-015e-9535-bd6a9004932c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "17bb3d85-def1-9952-b486-361e37b6a19b"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "17bb3d85-def1-9952-b486-361e37b6a19b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' 
"memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "40364c68-3ec0-11ef-b9c4-5b09e08d4792"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "40364c68-3ec0-11ef-b9c4-5b09e08d4792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "40364c68-3ec0-11ef-b9c4-5b09e08d4792",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "402d1649-3ec0-11ef-b9c4-5b09e08d4792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "402e4ec6-3ec0-11ef-b9c4-5b09e08d4792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "4037781c-3ec0-11ef-b9c4-5b09e08d4792"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4037781c-3ec0-11ef-b9c4-5b09e08d4792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4037781c-3ec0-11ef-b9c4-5b09e08d4792",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "402f8746-3ec0-11ef-b9c4-5b09e08d4792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "4030bfc4-3ec0-11ef-b9c4-5b09e08d4792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "4038affa-3ec0-11ef-b9c4-5b09e08d4792"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4038affa-3ec0-11ef-b9c4-5b09e08d4792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4038affa-3ec0-11ef-b9c4-5b09e08d4792",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "4031f82a-3ec0-11ef-b9c4-5b09e08d4792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "403330af-3ec0-11ef-b9c4-5b09e08d4792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "404275ad-3ec0-11ef-b9c4-5b09e08d4792"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "404275ad-3ec0-11ef-b9c4-5b09e08d4792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:04:47.723 13:28:26 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:04:47.723 Malloc1p0 00:04:47.723 Malloc1p1 00:04:47.723 Malloc2p0 00:04:47.723 Malloc2p1 00:04:47.723 Malloc2p2 00:04:47.723 Malloc2p3 00:04:47.723 Malloc2p4 00:04:47.723 Malloc2p5 00:04:47.723 Malloc2p6 00:04:47.723 Malloc2p7 00:04:47.723 TestPT 00:04:47.723 raid0 00:04:47.723 concat0 ]] 00:04:47.723 13:28:26 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:04:47.724 13:28:26 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "402798b2-3ec0-11ef-b9c4-5b09e08d4792"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "402798b2-3ec0-11ef-b9c4-5b09e08d4792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "128eec9c-aec9-f957-80b9-575dd096b742"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "128eec9c-aec9-f957-80b9-575dd096b742",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "c624a236-9260-125d-b927-928d900500a5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "c624a236-9260-125d-b927-928d900500a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "51b6bdda-1544-1053-92f8-05ee09d8d432"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "51b6bdda-1544-1053-92f8-05ee09d8d432",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "c8595006-19dd-c05c-a9dc-e8413781aa18"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c8595006-19dd-c05c-a9dc-e8413781aa18",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "f5eddae7-a250-4658-877f-773d0b45f792"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f5eddae7-a250-4658-877f-773d0b45f792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' 
"83112fa8-cf32-665c-a2b0-f7190ced1088"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "83112fa8-cf32-665c-a2b0-f7190ced1088",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "ecccb021-0f38-4f5e-936c-5bf39391ebf1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ecccb021-0f38-4f5e-936c-5bf39391ebf1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "2460acd2-fe00-9154-a4c0-2302f30cfe6b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2460acd2-fe00-9154-a4c0-2302f30cfe6b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "9654b799-4e1b-8055-a2b3-a8971ed787aa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9654b799-4e1b-8055-a2b3-a8971ed787aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "0079bf30-c205-015e-9535-bd6a9004932c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0079bf30-c205-015e-9535-bd6a9004932c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "17bb3d85-def1-9952-b486-361e37b6a19b"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "17bb3d85-def1-9952-b486-361e37b6a19b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "40364c68-3ec0-11ef-b9c4-5b09e08d4792"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "40364c68-3ec0-11ef-b9c4-5b09e08d4792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "40364c68-3ec0-11ef-b9c4-5b09e08d4792",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "402d1649-3ec0-11ef-b9c4-5b09e08d4792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "402e4ec6-3ec0-11ef-b9c4-5b09e08d4792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "4037781c-3ec0-11ef-b9c4-5b09e08d4792"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4037781c-3ec0-11ef-b9c4-5b09e08d4792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4037781c-3ec0-11ef-b9c4-5b09e08d4792",' ' "strip_size_kb": 64,' ' "state": "online",' ' 
"raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "402f8746-3ec0-11ef-b9c4-5b09e08d4792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "4030bfc4-3ec0-11ef-b9c4-5b09e08d4792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "4038affa-3ec0-11ef-b9c4-5b09e08d4792"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4038affa-3ec0-11ef-b9c4-5b09e08d4792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4038affa-3ec0-11ef-b9c4-5b09e08d4792",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "4031f82a-3ec0-11ef-b9c4-5b09e08d4792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "403330af-3ec0-11ef-b9c4-5b09e08d4792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "404275ad-3ec0-11ef-b9c4-5b09e08d4792"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "404275ad-3ec0-11ef-b9c4-5b09e08d4792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:04:47.724 13:28:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:04:47.724 13:28:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:04:47.724 13:28:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:04:47.724 13:28:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:04:47.724 13:28:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:04:47.724 13:28:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:04:47.724 13:28:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:04:47.724 
13:28:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:04:47.724 13:28:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:04:47.724 13:28:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:04:47.724 13:28:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:04:47.724 13:28:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:04:47.724 13:28:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:04:47.724 13:28:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:04:47.724 13:28:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:04:47.724 13:28:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:04:47.724 13:28:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:04:47.724 13:28:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:04:47.724 13:28:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:04:47.724 13:28:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:04:47.724 13:28:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:04:47.724 13:28:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:04:47.724 13:28:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:04:47.724 13:28:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:04:47.724 13:28:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:04:47.724 13:28:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:04:47.724 13:28:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:04:47.724 13:28:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:04:47.724 13:28:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:04:47.724 13:28:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:04:47.724 13:28:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:04:47.724 13:28:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:04:47.724 13:28:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:04:47.724 13:28:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:04:47.724 13:28:26 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:04:47.724 13:28:26 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:04:47.724 13:28:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:04:47.724 13:28:26 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:04:47.724 13:28:26 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:04:47.724 13:28:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:04:47.724 13:28:26 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:04:47.724 13:28:26 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:04:47.724 13:28:26 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:04:47.724 13:28:26 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:04:47.724 13:28:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:47.724 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:04:47.724 ************************************ 00:04:47.724 START TEST bdev_fio_trim 00:04:47.724 ************************************ 00:04:47.724 13:28:26 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:04:47.725 13:28:26 -- common/autotest_common.sh@1335 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:04:47.725 13:28:26 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:04:47.725 13:28:26 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:04:47.725 13:28:26 -- common/autotest_common.sh@1318 -- # local sanitizers 00:04:47.725 13:28:26 -- common/autotest_common.sh@1319 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:04:47.725 13:28:26 -- common/autotest_common.sh@1320 -- # shift 00:04:47.725 13:28:26 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:04:47.725 13:28:26 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:04:47.725 13:28:26 -- common/autotest_common.sh@1324 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:04:47.725 13:28:26 -- common/autotest_common.sh@1324 -- # grep libasan 00:04:47.725 13:28:26 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:04:47.725 13:28:26 -- common/autotest_common.sh@1324 -- # asan_lib= 00:04:47.725 13:28:26 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:04:47.725 13:28:26 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:04:47.725 13:28:26 -- common/autotest_common.sh@1324 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:04:47.725 13:28:26 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:04:47.725 13:28:26 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:04:47.725 13:28:26 -- common/autotest_common.sh@1324 -- # asan_lib= 00:04:47.725 13:28:26 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:04:47.725 13:28:26 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:04:47.725 13:28:26 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:04:47.725 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:47.725 job_Malloc1p0: (g=0): rw=trimwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:47.725 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:47.725 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:47.725 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:47.725 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:47.725 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:47.725 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:47.725 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:47.725 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:47.725 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:47.725 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:47.725 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:47.725 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:47.725 fio-3.35 00:04:47.725 Starting 14 threads 00:04:47.725 EAL: TSC is not safe to use in SMP mode 00:04:47.725 EAL: TSC is not invariant 00:04:59.929 00:04:59.929 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=102675: Wed Jul 10 13:28:37 2024 00:04:59.929 write: IOPS=2480k, BW=9689MiB/s (10.2GB/s)(94.6GiB/10002msec); 0 zone resets 00:04:59.929 slat (nsec): min=213, max=755925k, avg=1249.20, stdev=233991.01 00:04:59.929 clat (nsec): min=1169, max=1512.3M, avg=15377.87, stdev=1416074.69 00:04:59.929 lat (nsec): min=1691, max=1512.3M, avg=16627.07, stdev=1435274.72 00:04:59.929 clat percentiles (usec): 00:04:59.929 | 50.000th=[ 7], 99.000th=[ 16], 99.900th=[ 955], 99.990th=[ 971], 00:04:59.929 | 99.999th=[94897] 00:04:59.929 bw ( MiB/s): min= 3605, max=14954, per=100.00%, avg=9867.30, stdev=278.74, samples=258 00:04:59.929 iops : min=922950, max=3828368, avg=2526025.21, stdev=71357.72, samples=258 00:04:59.929 trim: IOPS=2480k, BW=9689MiB/s (10.2GB/s)(94.6GiB/10002msec); 0 zone resets 00:04:59.929 slat (nsec): min=440, max=1356.7M, avg=1678.67, stdev=339038.24 00:04:59.929 clat (nsec): min=323, max=881922k, avg=10841.37, stdev=701553.45 00:04:59.929 lat (nsec): min=1430, max=1356.7M, avg=12520.04, stdev=782210.69 00:04:59.929 clat percentiles (usec): 00:04:59.929 | 50.000th=[ 8], 99.000th=[ 15], 99.900th=[ 28], 99.990th=[ 40], 00:04:59.929 | 99.999th=[94897] 00:04:59.929 bw ( MiB/s): min= 3605, max=14954, per=100.00%, avg=9867.31, stdev=278.74, samples=258 00:04:59.929 iops : min=922950, max=3828385, avg=2526027.32, stdev=71357.73, samples=258 00:04:59.929 lat (nsec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:04:59.929 lat (usec) : 2=0.10%, 4=23.67%, 10=58.75%, 20=16.95%, 50=0.30% 00:04:59.929 lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.18% 00:04:59.929 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01% 00:04:59.929 lat 
(msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:04:59.929 lat (msec) : 2000=0.01% 00:04:59.929 cpu : usr=63.66%, sys=4.78%, ctx=1185343, majf=0, minf=0 00:04:59.929 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:04:59.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:04:59.929 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:04:59.929 issued rwts: total=0,24807784,24807788,0 short=0,0,0,0 dropped=0,0,0,0 00:04:59.929 latency : target=0, window=0, percentile=100.00%, depth=8 00:04:59.929 00:04:59.929 Run status group 0 (all jobs): 00:04:59.929 WRITE: bw=9689MiB/s (10.2GB/s), 9689MiB/s-9689MiB/s (10.2GB/s-10.2GB/s), io=94.6GiB (102GB), run=10002-10002msec 00:04:59.929 TRIM: bw=9689MiB/s (10.2GB/s), 9689MiB/s-9689MiB/s (10.2GB/s-10.2GB/s), io=94.6GiB (102GB), run=10002-10002msec 00:04:59.929 00:04:59.929 real 0m11.793s 00:04:59.929 user 1m33.743s 00:04:59.929 sys 0m9.187s 00:04:59.929 13:28:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.929 13:28:37 -- common/autotest_common.sh@10 -- # set +x 00:04:59.929 ************************************ 00:04:59.929 END TEST bdev_fio_trim 00:04:59.929 ************************************ 00:04:59.929 13:28:37 -- bdev/blockdev.sh@366 -- # rm -f 00:04:59.929 13:28:37 -- bdev/blockdev.sh@367 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:04:59.929 /usr/home/vagrant/spdk_repo/spdk 00:04:59.929 13:28:37 -- bdev/blockdev.sh@368 -- # popd 00:04:59.929 13:28:37 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:04:59.929 00:04:59.929 real 0m24.228s 00:04:59.929 user 3m7.116s 00:04:59.929 sys 0m16.770s 00:04:59.929 13:28:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.929 13:28:37 -- common/autotest_common.sh@10 -- # set +x 00:04:59.929 ************************************ 00:04:59.929 END TEST bdev_fio 00:04:59.929 ************************************ 00:04:59.929 13:28:37 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:04:59.929 13:28:37 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:04:59.929 13:28:37 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:04:59.929 13:28:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:59.929 13:28:37 -- common/autotest_common.sh@10 -- # set +x 00:04:59.929 ************************************ 00:04:59.929 START TEST bdev_verify 00:04:59.929 ************************************ 00:04:59.929 13:28:37 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:04:59.929 [2024-07-10 13:28:37.945153] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
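The bdevperf command line above drives this verify stage. Going by common bdevperf usage (the flag meanings are not spelled out in this log): -q sets the per-job queue depth, -o the IO size in bytes, -w the workload, -t the run time in seconds and -m the reactor core mask. A rough standalone equivalent:

    # Rough standalone equivalent of the invocation above; paths and flags are
    # taken from this log, flag meanings assumed from common bdevperf usage.
    cd /usr/home/vagrant/spdk_repo/spdk
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3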
00:04:59.929 [2024-07-10 13:28:37.945537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:59.929 EAL: TSC is not safe to use in SMP mode 00:04:59.929 EAL: TSC is not invariant 00:04:59.929 [2024-07-10 13:28:38.374909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.929 [2024-07-10 13:28:38.464327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.929 [2024-07-10 13:28:38.464329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.929 [2024-07-10 13:28:38.518649] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:04:59.929 [2024-07-10 13:28:38.518685] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:04:59.929 [2024-07-10 13:28:38.526624] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:04:59.929 [2024-07-10 13:28:38.526646] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:04:59.929 [2024-07-10 13:28:38.534637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:59.929 [2024-07-10 13:28:38.534657] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:04:59.929 [2024-07-10 13:28:38.534663] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:04:59.929 [2024-07-10 13:28:38.582643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:59.929 [2024-07-10 13:28:38.582703] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:59.929 [2024-07-10 13:28:38.582715] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b61c800 00:04:59.929 [2024-07-10 13:28:38.582720] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:59.929 [2024-07-10 13:28:38.583084] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.929 [2024-07-10 13:28:38.583106] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:04:59.929 Running I/O for 5 seconds... 
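In the per-bdev table that follows, the MiB/s column is simply IOPS times the 4096-byte IO size. For example, the Malloc0 row on core mask 0x1 checks out:

    # Throughput sanity check for the table below:
    # 12600.99 IOPS * 4096 B = ~49.22 MiB/s, matching the Malloc0 MiB/s column.
    awk 'BEGIN { printf "%.2f MiB/s\n", 12600.99 * 4096 / (1024 * 1024) }'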
00:05:05.234 00:05:05.234 Latency(us) 00:05:05.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:05.234 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:05.234 Verification LBA range: start 0x0 length 0x1000 00:05:05.234 Malloc0 : 5.02 12600.99 49.22 0.00 0.00 10138.68 209.74 20792.29 00:05:05.234 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:05.234 Verification LBA range: start 0x1000 length 0x1000 00:05:05.234 Malloc0 : 5.02 28.49 0.11 0.00 0.00 4490001.68 267.76 5030362.53 00:05:05.234 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:05.234 Verification LBA range: start 0x0 length 0x800 00:05:05.234 Malloc1p0 : 5.02 10436.87 40.77 0.00 0.00 12246.43 358.80 12452.52 00:05:05.234 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:05.234 Verification LBA range: start 0x800 length 0x800 00:05:05.234 Malloc1p0 : 5.02 11758.30 45.93 0.00 0.00 10875.38 355.23 12338.28 00:05:05.234 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:05.234 Verification LBA range: start 0x0 length 0x800 00:05:05.234 Malloc1p1 : 5.02 10436.60 40.77 0.00 0.00 12244.91 319.52 12509.65 00:05:05.234 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:05.234 Verification LBA range: start 0x800 length 0x800 00:05:05.234 Malloc1p1 : 5.02 11757.99 45.93 0.00 0.00 10873.70 326.66 12338.28 00:05:05.234 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:05.234 Verification LBA range: start 0x0 length 0x200 00:05:05.234 Malloc2p0 : 5.02 10436.36 40.77 0.00 0.00 12243.85 321.31 12566.77 00:05:05.234 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:05.234 Verification LBA range: start 0x200 length 0x200 00:05:05.234 Malloc2p0 : 5.02 11757.71 45.93 0.00 0.00 10872.62 321.31 12338.28 00:05:05.234 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:05.234 Verification LBA range: start 0x0 length 0x200 00:05:05.234 Malloc2p1 : 5.02 10449.14 40.82 0.00 0.00 12233.86 321.31 12566.77 00:05:05.234 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:05.234 Verification LBA range: start 0x200 length 0x200 00:05:05.234 Malloc2p1 : 5.02 11757.45 45.93 0.00 0.00 10871.95 317.74 12338.28 00:05:05.234 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:05.234 Verification LBA range: start 0x0 length 0x200 00:05:05.235 Malloc2p2 : 5.02 10448.88 40.82 0.00 0.00 12233.01 321.31 12566.77 00:05:05.235 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x200 length 0x200 00:05:05.235 Malloc2p2 : 5.02 11757.19 45.93 0.00 0.00 10870.15 328.45 12338.28 00:05:05.235 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x0 length 0x200 00:05:05.235 Malloc2p3 : 5.02 10448.61 40.81 0.00 0.00 12231.53 324.88 12452.52 00:05:05.235 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x200 length 0x200 00:05:05.235 Malloc2p3 : 5.02 11756.93 45.93 0.00 0.00 10869.18 339.16 12281.16 00:05:05.235 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x0 length 0x200 00:05:05.235 Malloc2p4 : 5.02 10448.39 40.81 0.00 0.00 12230.37 
326.66 12452.52 00:05:05.235 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x200 length 0x200 00:05:05.235 Malloc2p4 : 5.02 11756.64 45.92 0.00 0.00 10868.16 326.66 12281.16 00:05:05.235 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x0 length 0x200 00:05:05.235 Malloc2p5 : 5.02 10448.16 40.81 0.00 0.00 12228.45 326.66 12452.52 00:05:05.235 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x200 length 0x200 00:05:05.235 Malloc2p5 : 5.02 11756.41 45.92 0.00 0.00 10867.11 323.09 12338.28 00:05:05.235 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x0 length 0x200 00:05:05.235 Malloc2p6 : 5.02 10447.93 40.81 0.00 0.00 12227.72 328.45 12509.65 00:05:05.235 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x200 length 0x200 00:05:05.235 Malloc2p6 : 5.02 11756.16 45.92 0.00 0.00 10865.99 317.74 12338.28 00:05:05.235 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x0 length 0x200 00:05:05.235 Malloc2p7 : 5.02 10447.71 40.81 0.00 0.00 12225.87 330.23 12681.01 00:05:05.235 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x200 length 0x200 00:05:05.235 Malloc2p7 : 5.02 11755.91 45.92 0.00 0.00 10864.87 330.23 12338.28 00:05:05.235 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x0 length 0x1000 00:05:05.235 TestPT : 5.02 10431.37 40.75 0.00 0.00 12244.55 821.12 12681.01 00:05:05.235 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x1000 length 0x1000 00:05:05.235 TestPT : 5.02 4669.20 18.24 0.00 0.00 27357.39 227.59 63976.27 00:05:05.235 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x0 length 0x2000 00:05:05.235 raid0 : 5.02 10447.27 40.81 0.00 0.00 12222.93 339.16 12795.25 00:05:05.235 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x2000 length 0x2000 00:05:05.235 raid0 : 5.02 11755.14 45.92 0.00 0.00 10861.81 337.37 12166.92 00:05:05.235 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x0 length 0x2000 00:05:05.235 concat0 : 5.02 10446.98 40.81 0.00 0.00 12221.34 330.23 12224.04 00:05:05.235 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x2000 length 0x2000 00:05:05.235 concat0 : 5.02 11754.86 45.92 0.00 0.00 10860.56 340.94 12166.92 00:05:05.235 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x0 length 0x1000 00:05:05.235 raid1 : 5.02 10446.71 40.81 0.00 0.00 12219.93 396.28 11367.21 00:05:05.235 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x1000 length 0x1000 00:05:05.235 raid1 : 5.02 11754.58 45.92 0.00 0.00 10859.14 385.57 12166.92 00:05:05.235 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 
0x0 length 0x4e2 00:05:05.235 AIO0 : 5.15 745.81 2.91 0.00 0.00 169432.60 8739.62 281495.58 00:05:05.235 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:05.235 Verification LBA range: start 0x4e2 length 0x4e2 00:05:05.235 AIO0 : 5.15 749.62 2.93 0.00 0.00 168453.16 7939.91 297946.62 00:05:05.235 =================================================================================================================== 00:05:05.235 Total : 317850.38 1241.60 0.00 0.00 12873.91 209.74 5030362.53 00:05:05.235 00:05:05.235 real 0m6.125s 00:05:05.235 user 0m10.879s 00:05:05.235 sys 0m0.521s 00:05:05.235 13:28:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.235 13:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:05.235 ************************************ 00:05:05.235 END TEST bdev_verify 00:05:05.235 ************************************ 00:05:05.235 13:28:44 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:05.235 13:28:44 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:05:05.235 13:28:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:05.235 13:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:05.235 ************************************ 00:05:05.235 START TEST bdev_verify_big_io 00:05:05.235 ************************************ 00:05:05.235 13:28:44 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:05.235 [2024-07-10 13:28:44.120764] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
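This big-IO pass reuses the same JSON config with 65536-byte IOs (-o 65536). The Malloc2p* split bdevs shown in the bdev dump earlier are only 8192 blocks of 512 B (4 MiB) apiece, so only a small number of 64 KiB verify IOs can be in flight per bdev at once, which is what the queue-depth warnings further below report. A hypothetical one-liner (not part of the test scripts) to pick those bdevs out of a saved bdev list:

    # Hypothetical helper, assuming the bdev list was saved to bdevs.json
    # (e.g. via "rpc.py bdev_get_bdevs"): list the 4 MiB split bdevs whose size
    # caps the verify queue depth at 32 in the warnings below.
    jq -r '.[] | select(.block_size == 512 and .num_blocks == 8192) | .name' bdevs.json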
00:05:05.235 [2024-07-10 13:28:44.120996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:05.235 EAL: TSC is not safe to use in SMP mode 00:05:05.235 EAL: TSC is not invariant 00:05:05.235 [2024-07-10 13:28:44.551994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.494 [2024-07-10 13:28:44.635586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.494 [2024-07-10 13:28:44.635582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.494 [2024-07-10 13:28:44.689965] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:05.494 [2024-07-10 13:28:44.690001] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:05.494 [2024-07-10 13:28:44.697976] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:05.494 [2024-07-10 13:28:44.698020] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:05.494 [2024-07-10 13:28:44.705975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:05.494 [2024-07-10 13:28:44.706004] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:05.494 [2024-07-10 13:28:44.706011] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:05.494 [2024-07-10 13:28:44.753974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:05.494 [2024-07-10 13:28:44.754008] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:05.494 [2024-07-10 13:28:44.754019] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829cf9800 00:05:05.494 [2024-07-10 13:28:44.754024] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:05.494 [2024-07-10 13:28:44.754299] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:05.494 [2024-07-10 13:28:44.754318] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:05.754 [2024-07-10 13:28:44.854623] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:05:05.754 [2024-07-10 13:28:44.854749] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:05:05.754 [2024-07-10 13:28:44.854806] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:05:05.754 [2024-07-10 13:28:44.854870] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:05:05.754 [2024-07-10 13:28:44.854941] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:05:05.754 [2024-07-10 13:28:44.855014] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:05:05.754 [2024-07-10 13:28:44.855116] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:05:05.754 [2024-07-10 13:28:44.855210] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:05:05.754 [2024-07-10 13:28:44.855309] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:05:05.754 [2024-07-10 13:28:44.855410] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:05:05.754 [2024-07-10 13:28:44.855510] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:05:05.754 [2024-07-10 13:28:44.855609] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:05:05.754 [2024-07-10 13:28:44.855701] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:05:05.754 [2024-07-10 13:28:44.855808] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:05:05.754 [2024-07-10 13:28:44.855907] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:05:05.754 [2024-07-10 13:28:44.856016] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:05:05.754 [2024-07-10 13:28:44.857181] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:05:05.754 [2024-07-10 13:28:44.857319] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:05:05.754 Running I/O for 5 seconds... 00:05:11.139 00:05:11.139 Latency(us) 00:05:11.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:11.139 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x0 length 0x100 00:05:11.139 Malloc0 : 5.05 4572.85 285.80 0.00 0.00 27871.13 1927.86 90480.72 00:05:11.139 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x100 length 0x100 00:05:11.139 Malloc0 : 5.05 4953.95 309.62 0.00 0.00 25732.91 1870.73 110587.55 00:05:11.139 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x0 length 0x80 00:05:11.139 Malloc1p0 : 5.06 3022.38 188.90 0.00 0.00 42112.27 3084.57 83626.12 00:05:11.139 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x80 length 0x80 00:05:11.139 Malloc1p0 : 5.06 2503.43 156.46 0.00 0.00 50867.86 2927.49 79513.36 00:05:11.139 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x0 length 0x80 00:05:11.139 Malloc1p1 : 5.07 1166.64 72.91 0.00 0.00 109104.34 2627.60 196498.54 00:05:11.139 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x80 length 0x80 00:05:11.139 Malloc1p1 : 5.07 1272.78 79.55 0.00 0.00 100003.36 2684.72 184617.23 00:05:11.139 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x0 length 0x20 00:05:11.139 Malloc2p0 : 5.05 772.93 48.31 0.00 0.00 41134.42 785.42 56436.21 00:05:11.139 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x20 length 0x20 00:05:11.139 Malloc2p0 : 5.05 840.63 52.54 0.00 0.00 37827.22 785.42 51181.01 00:05:11.139 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x0 length 0x20 00:05:11.139 Malloc2p1 : 5.05 772.90 48.31 0.00 0.00 41121.09 860.40 56436.21 00:05:11.139 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x20 length 0x20 00:05:11.139 Malloc2p1 : 5.05 840.59 52.54 0.00 0.00 37817.22 849.68 51409.50 00:05:11.139 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x0 length 0x20 00:05:11.139 Malloc2p2 : 5.06 776.15 48.51 0.00 0.00 40972.44 796.13 56436.21 00:05:11.139 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x20 length 0x20 00:05:11.139 Malloc2p2 : 5.05 840.55 52.53 0.00 0.00 37804.51 788.99 51637.99 00:05:11.139 Job: Malloc2p3 (Core Mask 0x1, workload: verify, 
depth: 32, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x0 length 0x20 00:05:11.139 Malloc2p3 : 5.06 776.12 48.51 0.00 0.00 40960.56 821.12 56664.69 00:05:11.139 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x20 length 0x20 00:05:11.139 Malloc2p3 : 5.05 840.52 52.53 0.00 0.00 37793.06 828.26 51866.47 00:05:11.139 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x0 length 0x20 00:05:11.139 Malloc2p4 : 5.06 776.09 48.51 0.00 0.00 40947.91 835.40 56664.69 00:05:11.139 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x20 length 0x20 00:05:11.139 Malloc2p4 : 5.05 840.48 52.53 0.00 0.00 37783.09 792.56 52094.96 00:05:11.139 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x0 length 0x20 00:05:11.139 Malloc2p5 : 5.06 776.07 48.50 0.00 0.00 40934.52 817.55 56664.69 00:05:11.139 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x20 length 0x20 00:05:11.139 Malloc2p5 : 5.06 843.60 52.72 0.00 0.00 37660.92 824.69 52323.45 00:05:11.139 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x0 length 0x20 00:05:11.139 Malloc2p6 : 5.06 776.04 48.50 0.00 0.00 40924.54 806.84 56436.21 00:05:11.139 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x20 length 0x20 00:05:11.139 Malloc2p6 : 5.06 843.56 52.72 0.00 0.00 37650.09 817.55 52551.93 00:05:11.139 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x0 length 0x20 00:05:11.139 Malloc2p7 : 5.06 776.01 48.50 0.00 0.00 40911.54 835.40 56664.69 00:05:11.139 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x20 length 0x20 00:05:11.139 Malloc2p7 : 5.06 843.53 52.72 0.00 0.00 37639.35 828.26 52551.93 00:05:11.139 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x0 length 0x100 00:05:11.139 TestPT : 5.13 994.48 62.15 0.00 0.00 126841.77 9996.29 244937.71 00:05:11.139 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x100 length 0x100 00:05:11.139 TestPT : 5.25 24.76 1.55 0.00 0.00 5084657.44 2984.61 5205840.29 00:05:11.139 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x0 length 0x200 00:05:11.139 raid0 : 5.07 1171.84 73.24 0.00 0.00 108223.79 2870.36 197412.48 00:05:11.139 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:11.139 Verification LBA range: start 0x200 length 0x200 00:05:11.139 raid0 : 5.07 1279.52 79.97 0.00 0.00 99136.04 3041.73 186445.12 00:05:11.140 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:11.140 Verification LBA range: start 0x0 length 0x200 00:05:11.140 concat0 : 5.08 1171.82 73.24 0.00 0.00 108071.87 2841.80 196498.54 00:05:11.140 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:11.140 Verification LBA range: start 0x200 length 0x200 00:05:11.140 concat0 : 5.07 1278.65 79.92 0.00 0.00 99040.99 2956.05 
186445.12 00:05:11.140 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:11.140 Verification LBA range: start 0x0 length 0x100 00:05:11.140 raid1 : 5.08 1177.05 73.57 0.00 0.00 107491.30 3255.94 196498.54 00:05:11.140 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:11.140 Verification LBA range: start 0x100 length 0x100 00:05:11.140 raid1 : 5.07 1284.11 80.26 0.00 0.00 98525.51 3141.69 186445.12 00:05:11.140 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:05:11.140 Verification LBA range: start 0x0 length 0x4e 00:05:11.140 AIO0 : 5.07 1152.55 72.03 0.00 0.00 66817.30 1649.39 113786.36 00:05:11.140 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:05:11.140 Verification LBA range: start 0x4e length 0x4e 00:05:11.140 AIO0 : 5.07 1259.55 78.72 0.00 0.00 61115.70 1842.17 108302.68 00:05:11.140 =================================================================================================================== 00:05:11.140 Total : 41222.14 2576.38 0.00 0.00 59334.14 785.42 5205840.29 00:05:11.140 00:05:11.140 real 0m6.238s 00:05:11.140 user 0m11.153s 00:05:11.140 sys 0m0.680s 00:05:11.140 13:28:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.140 13:28:50 -- common/autotest_common.sh@10 -- # set +x 00:05:11.140 ************************************ 00:05:11.140 END TEST bdev_verify_big_io 00:05:11.140 ************************************ 00:05:11.140 13:28:50 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:11.140 13:28:50 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:11.140 13:28:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:11.140 13:28:50 -- common/autotest_common.sh@10 -- # set +x 00:05:11.140 ************************************ 00:05:11.140 START TEST bdev_write_zeroes 00:05:11.140 ************************************ 00:05:11.140 13:28:50 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:11.140 [2024-07-10 13:28:50.415760] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:11.140 [2024-07-10 13:28:50.416132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:11.709 EAL: TSC is not safe to use in SMP mode 00:05:11.709 EAL: TSC is not invariant 00:05:11.709 [2024-07-10 13:28:50.840639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.709 [2024-07-10 13:28:50.919549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.709 [2024-07-10 13:28:50.975321] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:11.709 [2024-07-10 13:28:50.975385] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:11.709 [2024-07-10 13:28:50.983310] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:11.709 [2024-07-10 13:28:50.983337] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:11.709 [2024-07-10 13:28:50.991325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:11.709 [2024-07-10 13:28:50.991351] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:11.709 [2024-07-10 13:28:50.991358] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:11.709 [2024-07-10 13:28:51.039348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:11.709 [2024-07-10 13:28:51.039413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:11.709 [2024-07-10 13:28:51.039432] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b7d8800 00:05:11.709 [2024-07-10 13:28:51.039439] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:11.709 [2024-07-10 13:28:51.039869] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:11.709 [2024-07-10 13:28:51.039895] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:11.967 Running I/O for 1 seconds... 
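The write_zeroes pass is driven by the same bdevperf binary as the verify job above, with only the workload type and runtime changed. A minimal sketch of an equivalent manual invocation, assuming the repository layout used on this runner (paths will differ on other machines); the per-bdev results of this one-second run follow below:

  # Short write_zeroes workload against the bdevs described in bdev.json:
  #   --json  bdev configuration file loaded at startup
  #   -q      queue depth per job
  #   -o      I/O size in bytes
  #   -w      workload type
  #   -t      run time in seconds
  cd /usr/home/vagrant/spdk_repo/spdk
  ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1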
00:05:12.905 00:05:12.905 Latency(us) 00:05:12.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:12.905 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:12.905 Malloc0 : 1.00 34536.32 134.91 0.00 0.00 3705.80 142.80 6254.82 00:05:12.905 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:12.905 Malloc1p0 : 1.00 34531.23 134.89 0.00 0.00 3704.61 161.55 6169.14 00:05:12.905 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:12.905 Malloc1p1 : 1.00 34528.34 134.88 0.00 0.00 3704.11 167.79 6026.34 00:05:12.905 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:12.905 Malloc2p0 : 1.00 34525.21 134.86 0.00 0.00 3702.83 168.69 5997.78 00:05:12.905 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:12.905 Malloc2p1 : 1.00 34522.46 134.85 0.00 0.00 3701.91 157.98 5883.53 00:05:12.905 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:12.905 Malloc2p2 : 1.01 34558.88 135.00 0.00 0.00 3697.00 162.44 5769.29 00:05:12.905 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:12.905 Malloc2p3 : 1.01 34553.15 134.97 0.00 0.00 3696.32 152.62 5683.61 00:05:12.905 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:12.905 Malloc2p4 : 1.01 34550.48 134.96 0.00 0.00 3695.49 151.73 5597.92 00:05:12.905 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:12.905 Malloc2p5 : 1.01 34547.84 134.95 0.00 0.00 3695.02 157.08 5569.36 00:05:12.905 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:12.905 Malloc2p6 : 1.01 34544.93 134.94 0.00 0.00 3693.54 158.87 5540.80 00:05:12.905 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:12.905 Malloc2p7 : 1.01 34542.46 134.93 0.00 0.00 3692.65 157.98 5483.68 00:05:12.905 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:12.905 TestPT : 1.01 34539.61 134.92 0.00 0.00 3692.28 158.87 5369.44 00:05:12.905 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:12.905 raid0 : 1.01 34536.26 134.91 0.00 0.00 3690.52 207.07 5426.56 00:05:12.905 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:12.905 concat0 : 1.01 34527.80 134.87 0.00 0.00 3689.46 214.21 5312.32 00:05:12.905 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:12.905 raid1 : 1.01 34521.12 134.85 0.00 0.00 3688.63 365.94 5226.63 00:05:12.905 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:12.905 AIO0 : 1.08 2041.26 7.97 0.00 0.00 60285.17 674.75 175477.76 00:05:12.905 =================================================================================================================== 00:05:12.905 Total : 520107.35 2031.67 0.00 0.00 3934.67 142.80 175477.76 00:05:13.164 00:05:13.164 real 0m2.024s 00:05:13.164 user 0m1.383s 00:05:13.164 sys 0m0.488s 00:05:13.164 13:28:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.164 13:28:52 -- common/autotest_common.sh@10 -- # set +x 00:05:13.164 ************************************ 00:05:13.164 END TEST bdev_write_zeroes 00:05:13.164 ************************************ 00:05:13.164 13:28:52 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
/usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:13.164 13:28:52 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:13.164 13:28:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.164 13:28:52 -- common/autotest_common.sh@10 -- # set +x 00:05:13.164 ************************************ 00:05:13.164 START TEST bdev_json_nonenclosed 00:05:13.164 ************************************ 00:05:13.164 13:28:52 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:13.164 [2024-07-10 13:28:52.482961] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:13.164 [2024-07-10 13:28:52.483312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:13.731 EAL: TSC is not safe to use in SMP mode 00:05:13.731 EAL: TSC is not invariant 00:05:13.731 [2024-07-10 13:28:52.920882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.731 [2024-07-10 13:28:53.008830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.731 [2024-07-10 13:28:53.008960] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:05:13.731 [2024-07-10 13:28:53.008973] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:13.988 00:05:13.988 real 0m0.634s 00:05:13.988 user 0m0.160s 00:05:13.988 sys 0m0.473s 00:05:13.988 13:28:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.988 13:28:53 -- common/autotest_common.sh@10 -- # set +x 00:05:13.988 ************************************ 00:05:13.988 END TEST bdev_json_nonenclosed 00:05:13.988 ************************************ 00:05:13.988 13:28:53 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:13.988 13:28:53 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:13.988 13:28:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.988 13:28:53 -- common/autotest_common.sh@10 -- # set +x 00:05:13.988 ************************************ 00:05:13.988 START TEST bdev_json_nonarray 00:05:13.988 ************************************ 00:05:13.988 13:28:53 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:13.988 [2024-07-10 13:28:53.164067] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
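The nonenclosed.json run above is rejected by spdk_subsystem_init_from_json_config because its top-level value is not a {}-enclosed object, and the bdev_json_nonarray run starting here fails the complementary check shown below: its "subsystems" member is not an array. For contrast, a minimal config of the shape the loader does accept could be written as in this sketch (the file path and the Malloc0 entry are illustrative only, not taken from this run's fixture files):

  # Minimal well-formed SPDK/bdevperf JSON config: a top-level object whose
  # "subsystems" member is an array of subsystem entries.
  cat > /tmp/bdev_ok.json << 'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_malloc_create",
            "params": { "name": "Malloc0", "num_blocks": 262144, "block_size": 512 }
          }
        ]
      }
    ]
  }
  EOF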
00:05:13.988 [2024-07-10 13:28:53.164449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:14.245 EAL: TSC is not safe to use in SMP mode 00:05:14.245 EAL: TSC is not invariant 00:05:14.503 [2024-07-10 13:28:53.598142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.503 [2024-07-10 13:28:53.687375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.503 [2024-07-10 13:28:53.687494] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:05:14.503 [2024-07-10 13:28:53.687504] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:14.503 00:05:14.503 real 0m0.630s 00:05:14.503 user 0m0.156s 00:05:14.503 sys 0m0.472s 00:05:14.503 13:28:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.503 13:28:53 -- common/autotest_common.sh@10 -- # set +x 00:05:14.503 ************************************ 00:05:14.503 END TEST bdev_json_nonarray 00:05:14.503 ************************************ 00:05:14.503 13:28:53 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:05:14.503 13:28:53 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:05:14.503 13:28:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:14.503 13:28:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.503 13:28:53 -- common/autotest_common.sh@10 -- # set +x 00:05:14.503 ************************************ 00:05:14.503 START TEST bdev_qos 00:05:14.503 ************************************ 00:05:14.503 13:28:53 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:05:14.503 13:28:53 -- bdev/blockdev.sh@444 -- # QOS_PID=47285 00:05:14.503 13:28:53 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 47285' 00:05:14.503 Process qos testing pid: 47285 00:05:14.503 13:28:53 -- bdev/blockdev.sh@443 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:05:14.503 13:28:53 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:05:14.503 13:28:53 -- bdev/blockdev.sh@447 -- # waitforlisten 47285 00:05:14.503 13:28:53 -- common/autotest_common.sh@819 -- # '[' -z 47285 ']' 00:05:14.503 13:28:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.503 13:28:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:14.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.503 13:28:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.503 13:28:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:14.503 13:28:53 -- common/autotest_common.sh@10 -- # set +x 00:05:14.503 [2024-07-10 13:28:53.848061] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:14.503 [2024-07-10 13:28:53.848308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:15.069 EAL: TSC is not safe to use in SMP mode 00:05:15.069 EAL: TSC is not invariant 00:05:15.069 [2024-07-10 13:28:54.299748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.069 [2024-07-10 13:28:54.386769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.637 13:28:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:15.637 13:28:54 -- common/autotest_common.sh@852 -- # return 0 00:05:15.637 13:28:54 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:05:15.637 13:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.637 13:28:54 -- common/autotest_common.sh@10 -- # set +x 00:05:15.637 Malloc_0 00:05:15.637 13:28:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.637 13:28:54 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:05:15.637 13:28:54 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:05:15.637 13:28:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:05:15.637 13:28:54 -- common/autotest_common.sh@889 -- # local i 00:05:15.637 13:28:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:05:15.637 13:28:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:05:15.637 13:28:54 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:05:15.637 13:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.637 13:28:54 -- common/autotest_common.sh@10 -- # set +x 00:05:15.637 13:28:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.637 13:28:54 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:05:15.637 13:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.637 13:28:54 -- common/autotest_common.sh@10 -- # set +x 00:05:15.637 [ 00:05:15.637 { 00:05:15.637 "name": "Malloc_0", 00:05:15.637 "aliases": [ 00:05:15.637 "5ab00176-3ec0-11ef-b9c4-5b09e08d4792" 00:05:15.637 ], 00:05:15.637 "product_name": "Malloc disk", 00:05:15.637 "block_size": 512, 00:05:15.637 "num_blocks": 262144, 00:05:15.637 "uuid": "5ab00176-3ec0-11ef-b9c4-5b09e08d4792", 00:05:15.637 "assigned_rate_limits": { 00:05:15.637 "rw_ios_per_sec": 0, 00:05:15.637 "rw_mbytes_per_sec": 0, 00:05:15.637 "r_mbytes_per_sec": 0, 00:05:15.637 "w_mbytes_per_sec": 0 00:05:15.637 }, 00:05:15.637 "claimed": false, 00:05:15.637 "zoned": false, 00:05:15.637 "supported_io_types": { 00:05:15.637 "read": true, 00:05:15.637 "write": true, 00:05:15.637 "unmap": true, 00:05:15.637 "write_zeroes": true, 00:05:15.637 "flush": true, 00:05:15.637 "reset": true, 00:05:15.637 "compare": false, 00:05:15.637 "compare_and_write": false, 00:05:15.637 "abort": true, 00:05:15.637 "nvme_admin": false, 00:05:15.637 "nvme_io": false 00:05:15.637 }, 00:05:15.637 "memory_domains": [ 00:05:15.637 { 00:05:15.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.637 "dma_device_type": 2 00:05:15.637 } 00:05:15.637 ], 00:05:15.637 "driver_specific": {} 00:05:15.637 } 00:05:15.637 ] 00:05:15.637 13:28:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.637 13:28:54 -- common/autotest_common.sh@895 -- # return 0 00:05:15.637 13:28:54 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:05:15.637 13:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.637 13:28:54 -- common/autotest_common.sh@10 -- # 
set +x 00:05:15.637 Null_1 00:05:15.637 13:28:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.637 13:28:54 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:05:15.637 13:28:54 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:05:15.637 13:28:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:05:15.637 13:28:54 -- common/autotest_common.sh@889 -- # local i 00:05:15.637 13:28:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:05:15.637 13:28:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:05:15.637 13:28:54 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:05:15.637 13:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.637 13:28:54 -- common/autotest_common.sh@10 -- # set +x 00:05:15.637 13:28:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.637 13:28:54 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:05:15.637 13:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.637 13:28:54 -- common/autotest_common.sh@10 -- # set +x 00:05:15.637 [ 00:05:15.637 { 00:05:15.637 "name": "Null_1", 00:05:15.637 "aliases": [ 00:05:15.637 "5ab61b80-3ec0-11ef-b9c4-5b09e08d4792" 00:05:15.637 ], 00:05:15.637 "product_name": "Null disk", 00:05:15.637 "block_size": 512, 00:05:15.637 "num_blocks": 262144, 00:05:15.637 "uuid": "5ab61b80-3ec0-11ef-b9c4-5b09e08d4792", 00:05:15.637 "assigned_rate_limits": { 00:05:15.637 "rw_ios_per_sec": 0, 00:05:15.637 "rw_mbytes_per_sec": 0, 00:05:15.637 "r_mbytes_per_sec": 0, 00:05:15.637 "w_mbytes_per_sec": 0 00:05:15.637 }, 00:05:15.637 "claimed": false, 00:05:15.637 "zoned": false, 00:05:15.637 "supported_io_types": { 00:05:15.637 "read": true, 00:05:15.637 "write": true, 00:05:15.637 "unmap": false, 00:05:15.637 "write_zeroes": true, 00:05:15.637 "flush": false, 00:05:15.637 "reset": true, 00:05:15.637 "compare": false, 00:05:15.637 "compare_and_write": false, 00:05:15.637 "abort": true, 00:05:15.637 "nvme_admin": false, 00:05:15.637 "nvme_io": false 00:05:15.637 }, 00:05:15.637 "driver_specific": {} 00:05:15.637 } 00:05:15.637 ] 00:05:15.637 13:28:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.637 13:28:54 -- common/autotest_common.sh@895 -- # return 0 00:05:15.637 13:28:54 -- bdev/blockdev.sh@455 -- # qos_function_test 00:05:15.637 13:28:54 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:05:15.637 13:28:54 -- bdev/blockdev.sh@454 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:05:15.637 13:28:54 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:05:15.637 13:28:54 -- bdev/blockdev.sh@410 -- # local io_result=0 00:05:15.637 13:28:54 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:05:15.637 13:28:54 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:05:15.637 13:28:54 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:05:15.637 13:28:54 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:05:15.637 13:28:54 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:05:15.637 13:28:54 -- bdev/blockdev.sh@375 -- # local iostat_result 00:05:15.637 13:28:54 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:05:15.637 13:28:54 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:05:15.637 13:28:54 -- bdev/blockdev.sh@376 -- # tail -1 00:05:15.637 Running I/O for 60 seconds... 
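The 60-second unthrottled run below measures roughly 736991 read IOPS on Malloc_0; the harness then derives an IOPS cap of 184000 from it, applies the cap with the bdev_set_qos_limit RPC, re-measures, and accepts the result only if it lands within ±10% of the cap (the lower_limit=165600 / upper_limit=202400 pair in the trace is exactly 90% and 110% of 184000). A stripped-down sketch of that acceptance check, using this run's numbers as stand-ins:

  # Accept the throttled run only if the measured rate is within +/-10% of the limit
  qos_limit=184000                      # IOPS cap requested via bdev_set_qos_limit
  iostat_result=184081                  # throttled IOPS reported by iostat.py
  lower_limit=$((qos_limit * 9 / 10))   # 90% of the cap  -> 165600
  upper_limit=$((qos_limit * 11 / 10))  # 110% of the cap -> 202400
  if [ "$iostat_result" -lt "$lower_limit" ] || [ "$iostat_result" -gt "$upper_limit" ]; then
      echo "QoS check failed: $iostat_result outside [$lower_limit, $upper_limit]"
      exit 1
  fi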
00:05:22.199 13:29:00 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 736991.78 2947967.14 0.00 0.00 3176448.00 0.00 0.00 ' 00:05:22.199 13:29:00 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:05:22.199 13:29:00 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:05:22.199 13:29:00 -- bdev/blockdev.sh@378 -- # iostat_result=736991.78 00:05:22.199 13:29:00 -- bdev/blockdev.sh@383 -- # echo 736991 00:05:22.199 13:29:00 -- bdev/blockdev.sh@414 -- # io_result=736991 00:05:22.199 13:29:00 -- bdev/blockdev.sh@416 -- # iops_limit=184000 00:05:22.199 13:29:00 -- bdev/blockdev.sh@417 -- # '[' 184000 -gt 1000 ']' 00:05:22.199 13:29:00 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 184000 Malloc_0 00:05:22.199 13:29:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:22.199 13:29:00 -- common/autotest_common.sh@10 -- # set +x 00:05:22.199 13:29:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:22.199 13:29:00 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 184000 IOPS Malloc_0 00:05:22.199 13:29:00 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:22.199 13:29:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.199 13:29:00 -- common/autotest_common.sh@10 -- # set +x 00:05:22.199 ************************************ 00:05:22.199 START TEST bdev_qos_iops 00:05:22.199 ************************************ 00:05:22.199 13:29:00 -- common/autotest_common.sh@1104 -- # run_qos_test 184000 IOPS Malloc_0 00:05:22.199 13:29:00 -- bdev/blockdev.sh@387 -- # local qos_limit=184000 00:05:22.199 13:29:00 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:05:22.199 13:29:00 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:05:22.199 13:29:00 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:05:22.199 13:29:00 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:05:22.199 13:29:00 -- bdev/blockdev.sh@375 -- # local iostat_result 00:05:22.199 13:29:00 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:05:22.199 13:29:00 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:05:22.199 13:29:00 -- bdev/blockdev.sh@376 -- # tail -1 00:05:27.475 13:29:05 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 184081.63 736326.52 0.00 0.00 795168.00 0.00 0.00 ' 00:05:27.475 13:29:05 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:05:27.475 13:29:05 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:05:27.475 13:29:05 -- bdev/blockdev.sh@378 -- # iostat_result=184081.63 00:05:27.475 13:29:05 -- bdev/blockdev.sh@383 -- # echo 184081 00:05:27.475 13:29:05 -- bdev/blockdev.sh@390 -- # qos_result=184081 00:05:27.475 13:29:05 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:05:27.475 13:29:05 -- bdev/blockdev.sh@394 -- # lower_limit=165600 00:05:27.475 13:29:05 -- bdev/blockdev.sh@395 -- # upper_limit=202400 00:05:27.475 13:29:05 -- bdev/blockdev.sh@398 -- # '[' 184081 -lt 165600 ']' 00:05:27.475 13:29:05 -- bdev/blockdev.sh@398 -- # '[' 184081 -gt 202400 ']' 00:05:27.475 00:05:27.475 real 0m5.498s 00:05:27.475 user 0m0.118s 00:05:27.475 sys 0m0.026s 00:05:27.475 13:29:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.475 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.475 ************************************ 00:05:27.475 END TEST bdev_qos_iops 00:05:27.475 ************************************ 00:05:27.475 13:29:05 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:05:27.475 13:29:05 -- bdev/blockdev.sh@373 -- # local 
limit_type=BANDWIDTH 00:05:27.475 13:29:05 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:05:27.475 13:29:05 -- bdev/blockdev.sh@375 -- # local iostat_result 00:05:27.475 13:29:05 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:05:27.475 13:29:05 -- bdev/blockdev.sh@376 -- # grep Null_1 00:05:27.475 13:29:05 -- bdev/blockdev.sh@376 -- # tail -1 00:05:32.746 13:29:11 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 608509.05 2434036.21 0.00 0.00 2623488.00 0.00 0.00 ' 00:05:32.746 13:29:11 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:05:32.746 13:29:11 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:05:32.746 13:29:11 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:05:32.746 13:29:11 -- bdev/blockdev.sh@380 -- # iostat_result=2623488.00 00:05:32.746 13:29:11 -- bdev/blockdev.sh@383 -- # echo 2623488 00:05:32.746 13:29:11 -- bdev/blockdev.sh@425 -- # bw_limit=2623488 00:05:32.746 13:29:11 -- bdev/blockdev.sh@426 -- # bw_limit=256 00:05:32.746 13:29:11 -- bdev/blockdev.sh@427 -- # '[' 256 -lt 2 ']' 00:05:32.746 13:29:11 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 256 Null_1 00:05:32.746 13:29:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.746 13:29:11 -- common/autotest_common.sh@10 -- # set +x 00:05:32.746 13:29:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.746 13:29:11 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 256 BANDWIDTH Null_1 00:05:32.746 13:29:11 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:32.746 13:29:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.746 13:29:11 -- common/autotest_common.sh@10 -- # set +x 00:05:32.746 ************************************ 00:05:32.746 START TEST bdev_qos_bw 00:05:32.746 ************************************ 00:05:32.746 13:29:11 -- common/autotest_common.sh@1104 -- # run_qos_test 256 BANDWIDTH Null_1 00:05:32.746 13:29:11 -- bdev/blockdev.sh@387 -- # local qos_limit=256 00:05:32.746 13:29:11 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:05:32.746 13:29:11 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:05:32.746 13:29:11 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:05:32.746 13:29:11 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:05:32.746 13:29:11 -- bdev/blockdev.sh@375 -- # local iostat_result 00:05:32.746 13:29:11 -- bdev/blockdev.sh@376 -- # grep Null_1 00:05:32.746 13:29:11 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:05:32.746 13:29:11 -- bdev/blockdev.sh@376 -- # tail -1 00:05:38.017 13:29:16 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 65558.17 262232.69 0.00 0.00 272368.00 0.00 0.00 ' 00:05:38.017 13:29:16 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:05:38.017 13:29:16 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:05:38.017 13:29:16 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:05:38.017 13:29:16 -- bdev/blockdev.sh@380 -- # iostat_result=272368.00 00:05:38.017 13:29:16 -- bdev/blockdev.sh@383 -- # echo 272368 00:05:38.017 13:29:16 -- bdev/blockdev.sh@390 -- # qos_result=272368 00:05:38.017 13:29:16 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:05:38.017 13:29:16 -- bdev/blockdev.sh@392 -- # qos_limit=262144 00:05:38.017 13:29:16 -- bdev/blockdev.sh@394 -- # lower_limit=235929 00:05:38.017 13:29:16 -- bdev/blockdev.sh@395 -- # upper_limit=288358 00:05:38.017 13:29:16 -- bdev/blockdev.sh@398 -- # '[' 
272368 -lt 235929 ']' 00:05:38.017 13:29:16 -- bdev/blockdev.sh@398 -- # '[' 272368 -gt 288358 ']' 00:05:38.017 00:05:38.017 real 0m5.463s 00:05:38.017 user 0m0.088s 00:05:38.017 sys 0m0.061s 00:05:38.017 13:29:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.017 13:29:16 -- common/autotest_common.sh@10 -- # set +x 00:05:38.017 ************************************ 00:05:38.017 END TEST bdev_qos_bw 00:05:38.017 ************************************ 00:05:38.017 13:29:16 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:05:38.017 13:29:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.017 13:29:16 -- common/autotest_common.sh@10 -- # set +x 00:05:38.017 13:29:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.017 13:29:16 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:05:38.017 13:29:16 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:38.017 13:29:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.017 13:29:16 -- common/autotest_common.sh@10 -- # set +x 00:05:38.017 ************************************ 00:05:38.017 START TEST bdev_qos_ro_bw 00:05:38.017 ************************************ 00:05:38.017 13:29:16 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:05:38.017 13:29:16 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:05:38.017 13:29:16 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:05:38.017 13:29:16 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:05:38.017 13:29:16 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:05:38.017 13:29:16 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:05:38.017 13:29:16 -- bdev/blockdev.sh@375 -- # local iostat_result 00:05:38.017 13:29:16 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:05:38.017 13:29:16 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:05:38.017 13:29:16 -- bdev/blockdev.sh@376 -- # tail -1 00:05:43.301 13:29:22 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.94 2047.75 0.00 0.00 2212.00 0.00 0.00 ' 00:05:43.301 13:29:22 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:05:43.301 13:29:22 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:05:43.301 13:29:22 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:05:43.301 13:29:22 -- bdev/blockdev.sh@380 -- # iostat_result=2212.00 00:05:43.301 13:29:22 -- bdev/blockdev.sh@383 -- # echo 2212 00:05:43.301 13:29:22 -- bdev/blockdev.sh@390 -- # qos_result=2212 00:05:43.301 13:29:22 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:05:43.301 13:29:22 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:05:43.301 13:29:22 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:05:43.301 13:29:22 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:05:43.301 13:29:22 -- bdev/blockdev.sh@398 -- # '[' 2212 -lt 1843 ']' 00:05:43.301 13:29:22 -- bdev/blockdev.sh@398 -- # '[' 2212 -gt 2252 ']' 00:05:43.301 00:05:43.301 real 0m5.474s 00:05:43.301 user 0m0.140s 00:05:43.301 sys 0m0.017s 00:05:43.301 13:29:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.301 13:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:43.301 ************************************ 00:05:43.301 END TEST bdev_qos_ro_bw 00:05:43.301 ************************************ 00:05:43.301 13:29:22 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:05:43.301 13:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 
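Three throttles have now been exercised in turn: a read/write IOPS cap of 184000 on Malloc_0, an aggregate bandwidth cap of 256 MiB/s on Null_1, and a read-only bandwidth cap of 2 MiB/s back on Malloc_0, each verified against the same ±10% window. Outside this harness the equivalent limits could be applied by hand through the rpc.py helper; a sketch, assuming a running SPDK application on the default RPC socket (the method and option names match the rpc_cmd calls in the trace, the script path assumes the standard repository layout):

  # Apply the same three QoS limits the test used, then clear one of them again
  scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec 184000 Malloc_0
  scripts/rpc.py bdev_set_qos_limit --rw_mbytes_per_sec 256 Null_1
  scripts/rpc.py bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0
  # A value of 0 disables the corresponding limit
  scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec 0 Malloc_0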
00:05:43.301 13:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:43.560 13:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.560 13:29:22 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:05:43.560 13:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.560 13:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:43.819 00:05:43.819 Latency(us) 00:05:43.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:43.819 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:05:43.819 Malloc_0 : 27.92 250506.72 978.54 0.00 0.00 1012.27 319.52 504498.57 00:05:43.819 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:05:43.819 Null_1 : 27.95 420530.30 1642.70 0.00 0.00 608.44 47.30 22048.96 00:05:43.819 =================================================================================================================== 00:05:43.819 Total : 671037.02 2621.24 0.00 0.00 759.10 47.30 504498.57 00:05:43.819 0 00:05:43.819 13:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.819 13:29:22 -- bdev/blockdev.sh@459 -- # killprocess 47285 00:05:43.819 13:29:22 -- common/autotest_common.sh@926 -- # '[' -z 47285 ']' 00:05:43.819 13:29:22 -- common/autotest_common.sh@930 -- # kill -0 47285 00:05:43.819 13:29:22 -- common/autotest_common.sh@931 -- # uname 00:05:43.819 13:29:22 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:05:43.819 13:29:22 -- common/autotest_common.sh@934 -- # ps -c -o command 47285 00:05:43.819 13:29:22 -- common/autotest_common.sh@934 -- # tail -1 00:05:43.819 13:29:22 -- common/autotest_common.sh@934 -- # process_name=bdevperf 00:05:43.819 13:29:22 -- common/autotest_common.sh@936 -- # '[' bdevperf = sudo ']' 00:05:43.819 killing process with pid 47285 00:05:43.819 13:29:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47285' 00:05:43.819 13:29:22 -- common/autotest_common.sh@945 -- # kill 47285 00:05:43.819 Received shutdown signal, test time was about 27.972950 seconds 00:05:43.819 00:05:43.819 Latency(us) 00:05:43.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:43.819 =================================================================================================================== 00:05:43.819 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:05:43.819 13:29:22 -- common/autotest_common.sh@950 -- # wait 47285 00:05:43.819 13:29:23 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:05:43.819 00:05:43.819 real 0m29.238s 00:05:43.819 user 0m29.797s 00:05:43.819 sys 0m0.881s 00:05:43.819 13:29:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.819 13:29:23 -- common/autotest_common.sh@10 -- # set +x 00:05:43.819 ************************************ 00:05:43.819 END TEST bdev_qos 00:05:43.819 ************************************ 00:05:43.819 13:29:23 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:05:43.819 13:29:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:43.819 13:29:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.819 13:29:23 -- common/autotest_common.sh@10 -- # set +x 00:05:43.819 ************************************ 00:05:43.819 START TEST bdev_qd_sampling 00:05:43.819 ************************************ 00:05:43.819 13:29:23 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:05:43.819 13:29:23 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:05:43.819 13:29:23 
-- bdev/blockdev.sh@539 -- # QD_PID=47398 00:05:43.819 Process bdev QD sampling period testing pid: 47398 00:05:43.819 13:29:23 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 47398' 00:05:43.819 13:29:23 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:05:43.819 13:29:23 -- bdev/blockdev.sh@538 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:05:43.819 13:29:23 -- bdev/blockdev.sh@542 -- # waitforlisten 47398 00:05:43.819 13:29:23 -- common/autotest_common.sh@819 -- # '[' -z 47398 ']' 00:05:43.819 13:29:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.819 13:29:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.819 13:29:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.819 13:29:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.819 13:29:23 -- common/autotest_common.sh@10 -- # set +x 00:05:43.819 [2024-07-10 13:29:23.132240] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:43.819 [2024-07-10 13:29:23.132626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:44.387 EAL: TSC is not safe to use in SMP mode 00:05:44.387 EAL: TSC is not invariant 00:05:44.387 [2024-07-10 13:29:23.575508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.387 [2024-07-10 13:29:23.665602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.387 [2024-07-10 13:29:23.665602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.954 13:29:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:44.954 13:29:24 -- common/autotest_common.sh@852 -- # return 0 00:05:44.954 13:29:24 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:05:44.954 13:29:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.954 13:29:24 -- common/autotest_common.sh@10 -- # set +x 00:05:44.954 Malloc_QD 00:05:44.954 13:29:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.954 13:29:24 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:05:44.954 13:29:24 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:05:44.954 13:29:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:05:44.954 13:29:24 -- common/autotest_common.sh@889 -- # local i 00:05:44.954 13:29:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:05:44.954 13:29:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:05:44.954 13:29:24 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:05:44.954 13:29:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.954 13:29:24 -- common/autotest_common.sh@10 -- # set +x 00:05:44.955 13:29:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.955 13:29:24 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:05:44.955 13:29:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.955 13:29:24 -- common/autotest_common.sh@10 -- # set +x 00:05:44.955 [ 00:05:44.955 { 00:05:44.955 "name": "Malloc_QD", 00:05:44.955 "aliases": [ 00:05:44.955 "6c1f56fc-3ec0-11ef-b9c4-5b09e08d4792" 
00:05:44.955 ], 00:05:44.955 "product_name": "Malloc disk", 00:05:44.955 "block_size": 512, 00:05:44.955 "num_blocks": 262144, 00:05:44.955 "uuid": "6c1f56fc-3ec0-11ef-b9c4-5b09e08d4792", 00:05:44.955 "assigned_rate_limits": { 00:05:44.955 "rw_ios_per_sec": 0, 00:05:44.955 "rw_mbytes_per_sec": 0, 00:05:44.955 "r_mbytes_per_sec": 0, 00:05:44.955 "w_mbytes_per_sec": 0 00:05:44.955 }, 00:05:44.955 "claimed": false, 00:05:44.955 "zoned": false, 00:05:44.955 "supported_io_types": { 00:05:44.955 "read": true, 00:05:44.955 "write": true, 00:05:44.955 "unmap": true, 00:05:44.955 "write_zeroes": true, 00:05:44.955 "flush": true, 00:05:44.955 "reset": true, 00:05:44.955 "compare": false, 00:05:44.955 "compare_and_write": false, 00:05:44.955 "abort": true, 00:05:44.955 "nvme_admin": false, 00:05:44.955 "nvme_io": false 00:05:44.955 }, 00:05:44.955 "memory_domains": [ 00:05:44.955 { 00:05:44.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.955 "dma_device_type": 2 00:05:44.955 } 00:05:44.955 ], 00:05:44.955 "driver_specific": {} 00:05:44.955 } 00:05:44.955 ] 00:05:44.955 13:29:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.955 13:29:24 -- common/autotest_common.sh@895 -- # return 0 00:05:44.955 13:29:24 -- bdev/blockdev.sh@548 -- # sleep 2 00:05:44.955 13:29:24 -- bdev/blockdev.sh@547 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:05:44.955 Running I/O for 5 seconds... 00:05:46.864 13:29:26 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:05:46.864 13:29:26 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:05:46.864 13:29:26 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:05:46.864 13:29:26 -- bdev/blockdev.sh@519 -- # local iostats 00:05:46.864 13:29:26 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:05:46.864 13:29:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.864 13:29:26 -- common/autotest_common.sh@10 -- # set +x 00:05:46.864 13:29:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.864 13:29:26 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:05:46.864 13:29:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.864 13:29:26 -- common/autotest_common.sh@10 -- # set +x 00:05:46.864 13:29:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.864 13:29:26 -- bdev/blockdev.sh@523 -- # iostats='{ 00:05:46.864 "tick_rate": 2294610885, 00:05:46.864 "ticks": 735911325902, 00:05:46.864 "bdevs": [ 00:05:46.864 { 00:05:46.864 "name": "Malloc_QD", 00:05:46.864 "bytes_read": 14206145024, 00:05:46.864 "num_read_ops": 3468291, 00:05:46.864 "bytes_written": 0, 00:05:46.864 "num_write_ops": 0, 00:05:46.864 "bytes_unmapped": 0, 00:05:46.864 "num_unmap_ops": 0, 00:05:46.864 "bytes_copied": 0, 00:05:46.864 "num_copy_ops": 0, 00:05:46.864 "read_latency_ticks": 2328181620434, 00:05:46.864 "max_read_latency_ticks": 922994, 00:05:46.864 "min_read_latency_ticks": 34378, 00:05:46.864 "write_latency_ticks": 0, 00:05:46.864 "max_write_latency_ticks": 0, 00:05:46.864 "min_write_latency_ticks": 0, 00:05:46.864 "unmap_latency_ticks": 0, 00:05:46.864 "max_unmap_latency_ticks": 0, 00:05:46.864 "min_unmap_latency_ticks": 0, 00:05:46.864 "copy_latency_ticks": 0, 00:05:46.864 "max_copy_latency_ticks": 0, 00:05:46.864 "min_copy_latency_ticks": 0, 00:05:46.864 "io_error": {}, 00:05:46.864 "queue_depth_polling_period": 10, 00:05:46.864 "queue_depth": 512, 00:05:46.864 "io_time": 360, 00:05:46.864 "weighted_io_time": 184320 
00:05:46.864 } 00:05:46.864 ] 00:05:46.864 }' 00:05:46.864 13:29:26 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:05:46.864 13:29:26 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:05:46.864 13:29:26 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:05:46.864 13:29:26 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:05:46.864 13:29:26 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:05:46.864 13:29:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.864 13:29:26 -- common/autotest_common.sh@10 -- # set +x 00:05:46.864 00:05:46.864 Latency(us) 00:05:46.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:46.864 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:05:46.864 Malloc_QD : 2.02 871029.80 3402.46 0.00 0.00 293.70 57.12 403.42 00:05:46.864 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:05:46.864 Malloc_QD : 2.02 875203.48 3418.76 0.00 0.00 292.30 52.88 371.29 00:05:46.864 =================================================================================================================== 00:05:46.864 Total : 1746233.28 6821.22 0.00 0.00 292.99 52.88 403.42 00:05:46.864 0 00:05:46.864 13:29:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.125 13:29:26 -- bdev/blockdev.sh@552 -- # killprocess 47398 00:05:47.125 13:29:26 -- common/autotest_common.sh@926 -- # '[' -z 47398 ']' 00:05:47.125 13:29:26 -- common/autotest_common.sh@930 -- # kill -0 47398 00:05:47.125 13:29:26 -- common/autotest_common.sh@931 -- # uname 00:05:47.125 13:29:26 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:05:47.125 13:29:26 -- common/autotest_common.sh@934 -- # ps -c -o command 47398 00:05:47.125 13:29:26 -- common/autotest_common.sh@934 -- # tail -1 00:05:47.125 13:29:26 -- common/autotest_common.sh@934 -- # process_name=bdevperf 00:05:47.125 13:29:26 -- common/autotest_common.sh@936 -- # '[' bdevperf = sudo ']' 00:05:47.125 killing process with pid 47398 00:05:47.125 13:29:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47398' 00:05:47.125 13:29:26 -- common/autotest_common.sh@945 -- # kill 47398 00:05:47.125 Received shutdown signal, test time was about 2.054746 seconds 00:05:47.125 00:05:47.125 Latency(us) 00:05:47.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:47.125 =================================================================================================================== 00:05:47.125 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:05:47.125 13:29:26 -- common/autotest_common.sh@950 -- # wait 47398 00:05:47.125 13:29:26 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:05:47.125 00:05:47.125 real 0m3.253s 00:05:47.125 user 0m5.830s 00:05:47.125 sys 0m0.586s 00:05:47.125 13:29:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.125 13:29:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.125 ************************************ 00:05:47.125 END TEST bdev_qd_sampling 00:05:47.125 ************************************ 00:05:47.125 13:29:26 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:05:47.125 13:29:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:47.125 13:29:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.125 13:29:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.125 ************************************ 00:05:47.125 START TEST bdev_error 00:05:47.125 
************************************ 00:05:47.125 13:29:26 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:05:47.125 13:29:26 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:05:47.125 13:29:26 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:05:47.125 13:29:26 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:05:47.125 13:29:26 -- bdev/blockdev.sh@470 -- # ERR_PID=47429 00:05:47.125 13:29:26 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 47429' 00:05:47.125 Process error testing pid: 47429 00:05:47.125 13:29:26 -- bdev/blockdev.sh@469 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:05:47.125 13:29:26 -- bdev/blockdev.sh@472 -- # waitforlisten 47429 00:05:47.125 13:29:26 -- common/autotest_common.sh@819 -- # '[' -z 47429 ']' 00:05:47.125 13:29:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.125 13:29:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:47.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.125 13:29:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.125 13:29:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:47.125 13:29:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.125 [2024-07-10 13:29:26.439022] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:47.125 [2024-07-10 13:29:26.439295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:47.695 EAL: TSC is not safe to use in SMP mode 00:05:47.695 EAL: TSC is not invariant 00:05:47.695 [2024-07-10 13:29:26.873335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.695 [2024-07-10 13:29:26.960818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.264 13:29:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:48.264 13:29:27 -- common/autotest_common.sh@852 -- # return 0 00:05:48.264 13:29:27 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:05:48.264 13:29:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.264 13:29:27 -- common/autotest_common.sh@10 -- # set +x 00:05:48.264 Dev_1 00:05:48.264 13:29:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.264 13:29:27 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:05:48.264 13:29:27 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:05:48.264 13:29:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:05:48.264 13:29:27 -- common/autotest_common.sh@889 -- # local i 00:05:48.264 13:29:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:05:48.264 13:29:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:05:48.264 13:29:27 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:05:48.264 13:29:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.264 13:29:27 -- common/autotest_common.sh@10 -- # set +x 00:05:48.264 13:29:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.264 13:29:27 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:05:48.264 13:29:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.264 13:29:27 -- common/autotest_common.sh@10 -- # set +x 00:05:48.264 [ 00:05:48.264 { 00:05:48.264 "name": "Dev_1", 00:05:48.264 "aliases": [ 
00:05:48.264 "6e1a5e45-3ec0-11ef-b9c4-5b09e08d4792" 00:05:48.264 ], 00:05:48.264 "product_name": "Malloc disk", 00:05:48.264 "block_size": 512, 00:05:48.264 "num_blocks": 262144, 00:05:48.264 "uuid": "6e1a5e45-3ec0-11ef-b9c4-5b09e08d4792", 00:05:48.264 "assigned_rate_limits": { 00:05:48.264 "rw_ios_per_sec": 0, 00:05:48.264 "rw_mbytes_per_sec": 0, 00:05:48.264 "r_mbytes_per_sec": 0, 00:05:48.264 "w_mbytes_per_sec": 0 00:05:48.264 }, 00:05:48.264 "claimed": false, 00:05:48.264 "zoned": false, 00:05:48.264 "supported_io_types": { 00:05:48.264 "read": true, 00:05:48.264 "write": true, 00:05:48.264 "unmap": true, 00:05:48.264 "write_zeroes": true, 00:05:48.264 "flush": true, 00:05:48.264 "reset": true, 00:05:48.264 "compare": false, 00:05:48.264 "compare_and_write": false, 00:05:48.264 "abort": true, 00:05:48.264 "nvme_admin": false, 00:05:48.264 "nvme_io": false 00:05:48.264 }, 00:05:48.264 "memory_domains": [ 00:05:48.264 { 00:05:48.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.264 "dma_device_type": 2 00:05:48.264 } 00:05:48.264 ], 00:05:48.264 "driver_specific": {} 00:05:48.264 } 00:05:48.264 ] 00:05:48.264 13:29:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.264 13:29:27 -- common/autotest_common.sh@895 -- # return 0 00:05:48.264 13:29:27 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:05:48.264 13:29:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.264 13:29:27 -- common/autotest_common.sh@10 -- # set +x 00:05:48.264 true 00:05:48.264 13:29:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.264 13:29:27 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:05:48.264 13:29:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.264 13:29:27 -- common/autotest_common.sh@10 -- # set +x 00:05:48.264 Dev_2 00:05:48.264 13:29:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.264 13:29:27 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:05:48.264 13:29:27 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:05:48.264 13:29:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:05:48.264 13:29:27 -- common/autotest_common.sh@889 -- # local i 00:05:48.264 13:29:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:05:48.264 13:29:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:05:48.264 13:29:27 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:05:48.264 13:29:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.264 13:29:27 -- common/autotest_common.sh@10 -- # set +x 00:05:48.264 13:29:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.264 13:29:27 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:05:48.264 13:29:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.264 13:29:27 -- common/autotest_common.sh@10 -- # set +x 00:05:48.264 [ 00:05:48.264 { 00:05:48.264 "name": "Dev_2", 00:05:48.264 "aliases": [ 00:05:48.264 "6e21145e-3ec0-11ef-b9c4-5b09e08d4792" 00:05:48.264 ], 00:05:48.264 "product_name": "Malloc disk", 00:05:48.264 "block_size": 512, 00:05:48.264 "num_blocks": 262144, 00:05:48.264 "uuid": "6e21145e-3ec0-11ef-b9c4-5b09e08d4792", 00:05:48.264 "assigned_rate_limits": { 00:05:48.264 "rw_ios_per_sec": 0, 00:05:48.264 "rw_mbytes_per_sec": 0, 00:05:48.264 "r_mbytes_per_sec": 0, 00:05:48.264 "w_mbytes_per_sec": 0 00:05:48.264 }, 00:05:48.264 "claimed": false, 00:05:48.264 "zoned": false, 00:05:48.264 "supported_io_types": { 00:05:48.264 "read": true, 
00:05:48.264 "write": true, 00:05:48.264 "unmap": true, 00:05:48.264 "write_zeroes": true, 00:05:48.264 "flush": true, 00:05:48.264 "reset": true, 00:05:48.264 "compare": false, 00:05:48.264 "compare_and_write": false, 00:05:48.264 "abort": true, 00:05:48.264 "nvme_admin": false, 00:05:48.264 "nvme_io": false 00:05:48.264 }, 00:05:48.264 "memory_domains": [ 00:05:48.265 { 00:05:48.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.265 "dma_device_type": 2 00:05:48.265 } 00:05:48.265 ], 00:05:48.265 "driver_specific": {} 00:05:48.265 } 00:05:48.265 ] 00:05:48.265 13:29:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.265 13:29:27 -- common/autotest_common.sh@895 -- # return 0 00:05:48.265 13:29:27 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:05:48.265 13:29:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.265 13:29:27 -- common/autotest_common.sh@10 -- # set +x 00:05:48.265 13:29:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.265 13:29:27 -- bdev/blockdev.sh@482 -- # sleep 1 00:05:48.265 13:29:27 -- bdev/blockdev.sh@481 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:05:48.265 Running I/O for 5 seconds... 00:05:49.201 Process is existed as continue on error is set. Pid: 47429 00:05:49.201 13:29:28 -- bdev/blockdev.sh@485 -- # kill -0 47429 00:05:49.201 13:29:28 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 47429' 00:05:49.201 13:29:28 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:05:49.201 13:29:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.201 13:29:28 -- common/autotest_common.sh@10 -- # set +x 00:05:49.201 13:29:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.201 13:29:28 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:05:49.201 13:29:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.201 13:29:28 -- common/autotest_common.sh@10 -- # set +x 00:05:49.460 13:29:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.460 13:29:28 -- bdev/blockdev.sh@495 -- # sleep 5 00:05:49.460 Timeout while waiting for response: 00:05:49.460 00:05:49.460 00:05:53.655 00:05:53.655 Latency(us) 00:05:53.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:53.655 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:05:53.655 EE_Dev_1 : 0.98 379640.01 1482.97 5.08 0.00 41.97 20.19 108.00 00:05:53.655 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:05:53.655 Dev_2 : 5.00 792826.82 3096.98 0.00 0.00 20.01 5.61 18050.45 00:05:53.655 =================================================================================================================== 00:05:53.655 Total : 1172466.84 4579.95 5.08 0.00 21.90 5.61 18050.45 00:05:54.590 13:29:33 -- bdev/blockdev.sh@497 -- # killprocess 47429 00:05:54.590 13:29:33 -- common/autotest_common.sh@926 -- # '[' -z 47429 ']' 00:05:54.590 13:29:33 -- common/autotest_common.sh@930 -- # kill -0 47429 00:05:54.590 13:29:33 -- common/autotest_common.sh@931 -- # uname 00:05:54.590 13:29:33 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:05:54.590 13:29:33 -- common/autotest_common.sh@934 -- # ps -c -o command 47429 00:05:54.590 13:29:33 -- common/autotest_common.sh@934 -- # tail -1 00:05:54.590 13:29:33 -- common/autotest_common.sh@934 -- # process_name=bdevperf 00:05:54.590 13:29:33 -- common/autotest_common.sh@936 
-- # '[' bdevperf = sudo ']' 00:05:54.590 killing process with pid 47429 00:05:54.590 13:29:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47429' 00:05:54.590 13:29:33 -- common/autotest_common.sh@945 -- # kill 47429 00:05:54.590 Received shutdown signal, test time was about 5.000000 seconds 00:05:54.590 00:05:54.590 Latency(us) 00:05:54.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:54.590 =================================================================================================================== 00:05:54.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:05:54.590 13:29:33 -- common/autotest_common.sh@950 -- # wait 47429 00:05:54.590 13:29:33 -- bdev/blockdev.sh@501 -- # ERR_PID=47441 00:05:54.590 13:29:33 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 47441' 00:05:54.590 Process error testing pid: 47441 00:05:54.590 13:29:33 -- bdev/blockdev.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:05:54.590 13:29:33 -- bdev/blockdev.sh@503 -- # waitforlisten 47441 00:05:54.590 13:29:33 -- common/autotest_common.sh@819 -- # '[' -z 47441 ']' 00:05:54.590 13:29:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.590 13:29:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:54.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.590 13:29:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.590 13:29:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:54.590 13:29:33 -- common/autotest_common.sh@10 -- # set +x 00:05:54.590 [2024-07-10 13:29:33.908460] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:54.590 [2024-07-10 13:29:33.908820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:55.158 EAL: TSC is not safe to use in SMP mode 00:05:55.158 EAL: TSC is not invariant 00:05:55.158 [2024-07-10 13:29:34.347306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.158 [2024-07-10 13:29:34.424093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.763 13:29:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:55.763 13:29:34 -- common/autotest_common.sh@852 -- # return 0 00:05:55.763 13:29:34 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:05:55.763 13:29:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.763 13:29:34 -- common/autotest_common.sh@10 -- # set +x 00:05:55.763 Dev_1 00:05:55.763 13:29:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.763 13:29:34 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:05:55.763 13:29:34 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:05:55.763 13:29:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:05:55.763 13:29:34 -- common/autotest_common.sh@889 -- # local i 00:05:55.763 13:29:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:05:55.763 13:29:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:05:55.763 13:29:34 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:05:55.763 13:29:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.763 13:29:34 -- common/autotest_common.sh@10 -- # set +x 00:05:55.763 13:29:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.763 13:29:34 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:05:55.763 13:29:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.763 13:29:34 -- common/autotest_common.sh@10 -- # set +x 00:05:55.763 [ 00:05:55.763 { 00:05:55.763 "name": "Dev_1", 00:05:55.763 "aliases": [ 00:05:55.763 "728f0bf7-3ec0-11ef-b9c4-5b09e08d4792" 00:05:55.763 ], 00:05:55.763 "product_name": "Malloc disk", 00:05:55.763 "block_size": 512, 00:05:55.763 "num_blocks": 262144, 00:05:55.763 "uuid": "728f0bf7-3ec0-11ef-b9c4-5b09e08d4792", 00:05:55.763 "assigned_rate_limits": { 00:05:55.763 "rw_ios_per_sec": 0, 00:05:55.763 "rw_mbytes_per_sec": 0, 00:05:55.763 "r_mbytes_per_sec": 0, 00:05:55.763 "w_mbytes_per_sec": 0 00:05:55.763 }, 00:05:55.763 "claimed": false, 00:05:55.763 "zoned": false, 00:05:55.763 "supported_io_types": { 00:05:55.763 "read": true, 00:05:55.763 "write": true, 00:05:55.763 "unmap": true, 00:05:55.763 "write_zeroes": true, 00:05:55.763 "flush": true, 00:05:55.763 "reset": true, 00:05:55.763 "compare": false, 00:05:55.763 "compare_and_write": false, 00:05:55.763 "abort": true, 00:05:55.763 "nvme_admin": false, 00:05:55.763 "nvme_io": false 00:05:55.763 }, 00:05:55.763 "memory_domains": [ 00:05:55.763 { 00:05:55.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.763 "dma_device_type": 2 00:05:55.763 } 00:05:55.763 ], 00:05:55.763 "driver_specific": {} 00:05:55.763 } 00:05:55.763 ] 00:05:55.763 13:29:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.763 13:29:34 -- common/autotest_common.sh@895 -- # return 0 00:05:55.763 13:29:34 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:05:55.763 13:29:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.763 13:29:34 -- common/autotest_common.sh@10 -- # set +x 00:05:55.763 true 
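A condensed sketch of the error-injection flow that error_test_suite traces above, written out as the equivalent standalone RPC calls. Assumptions: a bdevperf instance started with "-z -m 0x2 -q 16 -o 4096 -w randread -t 5" is already listening on /var/tmp/spdk.sock, and "rpc.py"/"bdevperf.py" stand in for the full script paths (and the rpc_cmd helper) used in the trace.

rpc.py bdev_malloc_create -b Dev_1 128 512                  # 128 MiB backing bdev, 512 B blocks
rpc.py bdev_error_create Dev_1                              # exposes the error bdev EE_Dev_1 on top of Dev_1
rpc.py bdev_malloc_create -b Dev_2 128 512                  # plain bdev run alongside for comparison
rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5    # fail the next 5 I/Os submitted to EE_Dev_1
bdevperf.py -t 1 perform_tests                              # kick off the queued job ("Running I/O for 5 seconds...")
rpc.py bdev_error_delete EE_Dev_1                           # teardown, as in the PID 47429 run above
rpc.py bdev_malloc_delete Dev_1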
00:05:55.763 13:29:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.763 13:29:34 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:05:55.763 13:29:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.763 13:29:34 -- common/autotest_common.sh@10 -- # set +x 00:05:55.763 Dev_2 00:05:55.763 13:29:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.763 13:29:34 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:05:55.763 13:29:34 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:05:55.763 13:29:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:05:55.763 13:29:34 -- common/autotest_common.sh@889 -- # local i 00:05:55.763 13:29:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:05:55.763 13:29:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:05:55.763 13:29:34 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:05:55.763 13:29:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.763 13:29:34 -- common/autotest_common.sh@10 -- # set +x 00:05:55.763 13:29:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.763 13:29:34 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:05:55.763 13:29:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.763 13:29:34 -- common/autotest_common.sh@10 -- # set +x 00:05:55.763 [ 00:05:55.763 { 00:05:55.763 "name": "Dev_2", 00:05:55.763 "aliases": [ 00:05:55.763 "7295c257-3ec0-11ef-b9c4-5b09e08d4792" 00:05:55.763 ], 00:05:55.763 "product_name": "Malloc disk", 00:05:55.763 "block_size": 512, 00:05:55.763 "num_blocks": 262144, 00:05:55.763 "uuid": "7295c257-3ec0-11ef-b9c4-5b09e08d4792", 00:05:55.763 "assigned_rate_limits": { 00:05:55.763 "rw_ios_per_sec": 0, 00:05:55.763 "rw_mbytes_per_sec": 0, 00:05:55.764 "r_mbytes_per_sec": 0, 00:05:55.764 "w_mbytes_per_sec": 0 00:05:55.764 }, 00:05:55.764 "claimed": false, 00:05:55.764 "zoned": false, 00:05:55.764 "supported_io_types": { 00:05:55.764 "read": true, 00:05:55.764 "write": true, 00:05:55.764 "unmap": true, 00:05:55.764 "write_zeroes": true, 00:05:55.764 "flush": true, 00:05:55.764 "reset": true, 00:05:55.764 "compare": false, 00:05:55.764 "compare_and_write": false, 00:05:55.764 "abort": true, 00:05:55.764 "nvme_admin": false, 00:05:55.764 "nvme_io": false 00:05:55.764 }, 00:05:55.764 "memory_domains": [ 00:05:55.764 { 00:05:55.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.764 "dma_device_type": 2 00:05:55.764 } 00:05:55.764 ], 00:05:55.764 "driver_specific": {} 00:05:55.764 } 00:05:55.764 ] 00:05:55.764 13:29:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.764 13:29:34 -- common/autotest_common.sh@895 -- # return 0 00:05:55.764 13:29:34 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:05:55.764 13:29:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.764 13:29:34 -- common/autotest_common.sh@10 -- # set +x 00:05:55.764 13:29:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.764 13:29:34 -- bdev/blockdev.sh@513 -- # NOT wait 47441 00:05:55.764 13:29:34 -- common/autotest_common.sh@640 -- # local es=0 00:05:55.764 13:29:34 -- bdev/blockdev.sh@512 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:05:55.764 13:29:34 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 47441 00:05:55.764 13:29:34 -- common/autotest_common.sh@628 -- # local arg=wait 00:05:55.764 13:29:34 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:55.764 13:29:34 -- common/autotest_common.sh@632 -- # type -t wait 00:05:55.764 13:29:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:55.764 13:29:34 -- common/autotest_common.sh@643 -- # wait 47441 00:05:55.764 Running I/O for 5 seconds... 00:05:55.764 task offset: 68392 on job bdev=EE_Dev_1 fails 00:05:55.764 00:05:55.764 Latency(us) 00:05:55.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:55.764 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:05:55.764 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:05:55.764 EE_Dev_1 : 0.00 234042.55 914.23 53191.49 0.00 47.20 20.19 87.02 00:05:55.764 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:05:55.764 Dev_2 : 0.00 278260.87 1086.96 0.00 0.00 27.94 19.41 42.17 00:05:55.764 =================================================================================================================== 00:05:55.764 Total : 512303.42 2001.19 53191.49 0.00 36.76 19.41 87.02 00:05:55.764 [2024-07-10 13:29:35.037077] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:55.764 request: 00:05:55.764 { 00:05:55.764 "method": "perform_tests", 00:05:55.764 "req_id": 1 00:05:55.764 } 00:05:55.764 Got JSON-RPC error response 00:05:55.764 response: 00:05:55.764 { 00:05:55.764 "code": -32603, 00:05:55.764 "message": "bdevperf failed with error Operation not permitted" 00:05:55.764 } 00:05:56.041 13:29:35 -- common/autotest_common.sh@643 -- # es=255 00:05:56.041 13:29:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:56.041 13:29:35 -- common/autotest_common.sh@652 -- # es=127 00:05:56.041 13:29:35 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:56.041 13:29:35 -- common/autotest_common.sh@660 -- # es=1 00:05:56.041 13:29:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:56.041 00:05:56.041 real 0m8.789s 00:05:56.041 user 0m8.798s 00:05:56.041 sys 0m1.092s 00:05:56.041 13:29:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.041 13:29:35 -- common/autotest_common.sh@10 -- # set +x 00:05:56.041 ************************************ 00:05:56.041 END TEST bdev_error 00:05:56.041 ************************************ 00:05:56.041 13:29:35 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:05:56.041 13:29:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:56.041 13:29:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.041 13:29:35 -- common/autotest_common.sh@10 -- # set +x 00:05:56.041 ************************************ 00:05:56.041 START TEST bdev_stat 00:05:56.041 ************************************ 00:05:56.041 13:29:35 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:05:56.041 13:29:35 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:05:56.041 13:29:35 -- bdev/blockdev.sh@594 -- # STAT_PID=47464 00:05:56.041 Process Bdev IO statistics testing pid: 47464 00:05:56.041 13:29:35 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 47464' 00:05:56.041 13:29:35 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:05:56.041 13:29:35 -- bdev/blockdev.sh@593 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:05:56.041 13:29:35 -- bdev/blockdev.sh@597 -- # waitforlisten 47464 00:05:56.041 13:29:35 -- common/autotest_common.sh@819 -- # 
'[' -z 47464 ']' 00:05:56.041 13:29:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.041 13:29:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:56.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.041 13:29:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.041 13:29:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:56.041 13:29:35 -- common/autotest_common.sh@10 -- # set +x 00:05:56.041 [2024-07-10 13:29:35.276953] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:56.041 [2024-07-10 13:29:35.277320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:56.607 EAL: TSC is not safe to use in SMP mode 00:05:56.607 EAL: TSC is not invariant 00:05:56.607 [2024-07-10 13:29:35.709514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.607 [2024-07-10 13:29:35.799018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.608 [2024-07-10 13:29:35.799017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.865 13:29:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:56.866 13:29:36 -- common/autotest_common.sh@852 -- # return 0 00:05:56.866 13:29:36 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:05:56.866 13:29:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:56.866 13:29:36 -- common/autotest_common.sh@10 -- # set +x 00:05:57.124 Malloc_STAT 00:05:57.124 13:29:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.124 13:29:36 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:05:57.124 13:29:36 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:05:57.124 13:29:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:05:57.124 13:29:36 -- common/autotest_common.sh@889 -- # local i 00:05:57.124 13:29:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:05:57.124 13:29:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:05:57.124 13:29:36 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:05:57.124 13:29:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.124 13:29:36 -- common/autotest_common.sh@10 -- # set +x 00:05:57.124 13:29:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.124 13:29:36 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:05:57.124 13:29:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.124 13:29:36 -- common/autotest_common.sh@10 -- # set +x 00:05:57.124 [ 00:05:57.124 { 00:05:57.124 "name": "Malloc_STAT", 00:05:57.124 "aliases": [ 00:05:57.124 "73603dda-3ec0-11ef-b9c4-5b09e08d4792" 00:05:57.124 ], 00:05:57.124 "product_name": "Malloc disk", 00:05:57.124 "block_size": 512, 00:05:57.124 "num_blocks": 262144, 00:05:57.124 "uuid": "73603dda-3ec0-11ef-b9c4-5b09e08d4792", 00:05:57.124 "assigned_rate_limits": { 00:05:57.124 "rw_ios_per_sec": 0, 00:05:57.124 "rw_mbytes_per_sec": 0, 00:05:57.124 "r_mbytes_per_sec": 0, 00:05:57.124 "w_mbytes_per_sec": 0 00:05:57.124 }, 00:05:57.124 "claimed": false, 00:05:57.124 "zoned": false, 00:05:57.124 "supported_io_types": { 00:05:57.124 "read": true, 00:05:57.124 "write": true, 00:05:57.124 "unmap": true, 00:05:57.124 "write_zeroes": true, 
00:05:57.124 "flush": true, 00:05:57.124 "reset": true, 00:05:57.124 "compare": false, 00:05:57.124 "compare_and_write": false, 00:05:57.124 "abort": true, 00:05:57.124 "nvme_admin": false, 00:05:57.124 "nvme_io": false 00:05:57.124 }, 00:05:57.124 "memory_domains": [ 00:05:57.124 { 00:05:57.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.124 "dma_device_type": 2 00:05:57.124 } 00:05:57.124 ], 00:05:57.124 "driver_specific": {} 00:05:57.124 } 00:05:57.124 ] 00:05:57.124 13:29:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.124 13:29:36 -- common/autotest_common.sh@895 -- # return 0 00:05:57.124 13:29:36 -- bdev/blockdev.sh@603 -- # sleep 2 00:05:57.124 13:29:36 -- bdev/blockdev.sh@602 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:05:57.124 Running I/O for 10 seconds... 00:05:59.027 13:29:38 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:05:59.027 13:29:38 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:05:59.027 13:29:38 -- bdev/blockdev.sh@558 -- # local iostats 00:05:59.027 13:29:38 -- bdev/blockdev.sh@559 -- # local io_count1 00:05:59.027 13:29:38 -- bdev/blockdev.sh@560 -- # local io_count2 00:05:59.027 13:29:38 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:05:59.027 13:29:38 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:05:59.027 13:29:38 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:05:59.027 13:29:38 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:05:59.027 13:29:38 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:05:59.027 13:29:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.027 13:29:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.286 13:29:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.286 13:29:38 -- bdev/blockdev.sh@566 -- # iostats='{ 00:05:59.286 "tick_rate": 2294610885, 00:05:59.286 "ticks": 763929763692, 00:05:59.286 "bdevs": [ 00:05:59.286 { 00:05:59.286 "name": "Malloc_STAT", 00:05:59.286 "bytes_read": 14507086336, 00:05:59.286 "num_read_ops": 3541763, 00:05:59.286 "bytes_written": 0, 00:05:59.286 "num_write_ops": 0, 00:05:59.286 "bytes_unmapped": 0, 00:05:59.286 "num_unmap_ops": 0, 00:05:59.286 "bytes_copied": 0, 00:05:59.286 "num_copy_ops": 0, 00:05:59.286 "read_latency_ticks": 2381170050324, 00:05:59.286 "max_read_latency_ticks": 1100492, 00:05:59.286 "min_read_latency_ticks": 36662, 00:05:59.286 "write_latency_ticks": 0, 00:05:59.286 "max_write_latency_ticks": 0, 00:05:59.286 "min_write_latency_ticks": 0, 00:05:59.286 "unmap_latency_ticks": 0, 00:05:59.286 "max_unmap_latency_ticks": 0, 00:05:59.286 "min_unmap_latency_ticks": 0, 00:05:59.286 "copy_latency_ticks": 0, 00:05:59.286 "max_copy_latency_ticks": 0, 00:05:59.286 "min_copy_latency_ticks": 0, 00:05:59.286 "io_error": {} 00:05:59.286 } 00:05:59.286 ] 00:05:59.286 }' 00:05:59.286 13:29:38 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:05:59.286 13:29:38 -- bdev/blockdev.sh@567 -- # io_count1=3541763 00:05:59.286 13:29:38 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:05:59.286 13:29:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.286 13:29:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.286 13:29:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.286 13:29:38 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:05:59.286 "tick_rate": 2294610885, 00:05:59.286 "ticks": 764005791010, 00:05:59.286 "name": "Malloc_STAT", 
00:05:59.286 "channels": [ 00:05:59.286 { 00:05:59.286 "thread_id": 2, 00:05:59.286 "bytes_read": 7328497664, 00:05:59.286 "num_read_ops": 1789184, 00:05:59.286 "bytes_written": 0, 00:05:59.286 "num_write_ops": 0, 00:05:59.286 "bytes_unmapped": 0, 00:05:59.286 "num_unmap_ops": 0, 00:05:59.286 "bytes_copied": 0, 00:05:59.286 "num_copy_ops": 0, 00:05:59.286 "read_latency_ticks": 1209976370714, 00:05:59.286 "max_read_latency_ticks": 1067134, 00:05:59.286 "min_read_latency_ticks": 624322, 00:05:59.286 "write_latency_ticks": 0, 00:05:59.286 "max_write_latency_ticks": 0, 00:05:59.286 "min_write_latency_ticks": 0, 00:05:59.286 "unmap_latency_ticks": 0, 00:05:59.286 "max_unmap_latency_ticks": 0, 00:05:59.286 "min_unmap_latency_ticks": 0, 00:05:59.286 "copy_latency_ticks": 0, 00:05:59.286 "max_copy_latency_ticks": 0, 00:05:59.286 "min_copy_latency_ticks": 0 00:05:59.286 }, 00:05:59.286 { 00:05:59.286 "thread_id": 3, 00:05:59.286 "bytes_read": 7393509376, 00:05:59.286 "num_read_ops": 1805056, 00:05:59.286 "bytes_written": 0, 00:05:59.286 "num_write_ops": 0, 00:05:59.286 "bytes_unmapped": 0, 00:05:59.286 "num_unmap_ops": 0, 00:05:59.286 "bytes_copied": 0, 00:05:59.286 "num_copy_ops": 0, 00:05:59.286 "read_latency_ticks": 1210071296390, 00:05:59.286 "max_read_latency_ticks": 1100492, 00:05:59.286 "min_read_latency_ticks": 620404, 00:05:59.286 "write_latency_ticks": 0, 00:05:59.286 "max_write_latency_ticks": 0, 00:05:59.286 "min_write_latency_ticks": 0, 00:05:59.286 "unmap_latency_ticks": 0, 00:05:59.286 "max_unmap_latency_ticks": 0, 00:05:59.286 "min_unmap_latency_ticks": 0, 00:05:59.286 "copy_latency_ticks": 0, 00:05:59.286 "max_copy_latency_ticks": 0, 00:05:59.286 "min_copy_latency_ticks": 0 00:05:59.286 } 00:05:59.286 ] 00:05:59.286 }' 00:05:59.286 13:29:38 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:05:59.286 13:29:38 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=1789184 00:05:59.286 13:29:38 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=1789184 00:05:59.286 13:29:38 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:05:59.286 13:29:38 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=1805056 00:05:59.286 13:29:38 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=3594240 00:05:59.286 13:29:38 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:05:59.286 13:29:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.286 13:29:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.286 13:29:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.286 13:29:38 -- bdev/blockdev.sh@575 -- # iostats='{ 00:05:59.286 "tick_rate": 2294610885, 00:05:59.286 "ticks": 764119844888, 00:05:59.286 "bdevs": [ 00:05:59.286 { 00:05:59.286 "name": "Malloc_STAT", 00:05:59.286 "bytes_read": 15056540160, 00:05:59.286 "num_read_ops": 3675907, 00:05:59.286 "bytes_written": 0, 00:05:59.286 "num_write_ops": 0, 00:05:59.286 "bytes_unmapped": 0, 00:05:59.287 "num_unmap_ops": 0, 00:05:59.287 "bytes_copied": 0, 00:05:59.287 "num_copy_ops": 0, 00:05:59.287 "read_latency_ticks": 2478370362228, 00:05:59.287 "max_read_latency_ticks": 1100492, 00:05:59.287 "min_read_latency_ticks": 36662, 00:05:59.287 "write_latency_ticks": 0, 00:05:59.287 "max_write_latency_ticks": 0, 00:05:59.287 "min_write_latency_ticks": 0, 00:05:59.287 "unmap_latency_ticks": 0, 00:05:59.287 "max_unmap_latency_ticks": 0, 00:05:59.287 "min_unmap_latency_ticks": 0, 00:05:59.287 "copy_latency_ticks": 0, 00:05:59.287 "max_copy_latency_ticks": 0, 00:05:59.287 
"min_copy_latency_ticks": 0, 00:05:59.287 "io_error": {} 00:05:59.287 } 00:05:59.287 ] 00:05:59.287 }' 00:05:59.287 13:29:38 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:05:59.287 13:29:38 -- bdev/blockdev.sh@576 -- # io_count2=3675907 00:05:59.287 13:29:38 -- bdev/blockdev.sh@581 -- # '[' 3594240 -lt 3541763 ']' 00:05:59.287 13:29:38 -- bdev/blockdev.sh@581 -- # '[' 3594240 -gt 3675907 ']' 00:05:59.287 13:29:38 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:05:59.287 13:29:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.287 13:29:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.287 00:05:59.287 Latency(us) 00:05:59.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:59.287 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:05:59.287 Malloc_STAT : 2.14 866635.43 3385.29 0.00 0.00 295.19 46.86 467.68 00:05:59.287 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:05:59.287 Malloc_STAT : 2.15 874320.12 3415.31 0.00 0.00 292.60 54.89 481.96 00:05:59.287 =================================================================================================================== 00:05:59.287 Total : 1740955.55 6800.61 0.00 0.00 293.89 46.86 481.96 00:05:59.287 0 00:05:59.287 13:29:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.287 13:29:38 -- bdev/blockdev.sh@607 -- # killprocess 47464 00:05:59.287 13:29:38 -- common/autotest_common.sh@926 -- # '[' -z 47464 ']' 00:05:59.287 13:29:38 -- common/autotest_common.sh@930 -- # kill -0 47464 00:05:59.287 13:29:38 -- common/autotest_common.sh@931 -- # uname 00:05:59.287 13:29:38 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:05:59.287 13:29:38 -- common/autotest_common.sh@934 -- # ps -c -o command 47464 00:05:59.287 13:29:38 -- common/autotest_common.sh@934 -- # tail -1 00:05:59.287 13:29:38 -- common/autotest_common.sh@934 -- # process_name=bdevperf 00:05:59.287 13:29:38 -- common/autotest_common.sh@936 -- # '[' bdevperf = sudo ']' 00:05:59.287 killing process with pid 47464 00:05:59.287 13:29:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47464' 00:05:59.287 13:29:38 -- common/autotest_common.sh@945 -- # kill 47464 00:05:59.287 Received shutdown signal, test time was about 2.181815 seconds 00:05:59.287 00:05:59.287 Latency(us) 00:05:59.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:59.287 =================================================================================================================== 00:05:59.287 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:05:59.287 13:29:38 -- common/autotest_common.sh@950 -- # wait 47464 00:05:59.545 13:29:38 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:05:59.545 00:05:59.545 real 0m3.402s 00:05:59.545 user 0m6.243s 00:05:59.545 sys 0m0.558s 00:05:59.545 13:29:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.545 13:29:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.545 ************************************ 00:05:59.545 END TEST bdev_stat 00:05:59.545 ************************************ 00:05:59.545 13:29:38 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:05:59.545 13:29:38 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:05:59.545 13:29:38 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:05:59.545 13:29:38 -- bdev/blockdev.sh@809 -- # cleanup 00:05:59.545 13:29:38 -- bdev/blockdev.sh@21 -- # rm -f 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:05:59.545 13:29:38 -- bdev/blockdev.sh@22 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:59.545 13:29:38 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:05:59.545 13:29:38 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:05:59.545 13:29:38 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:05:59.545 13:29:38 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:05:59.545 00:05:59.545 real 1m29.645s 00:05:59.545 user 4m27.142s 00:05:59.545 sys 0m24.902s 00:05:59.545 13:29:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.545 13:29:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.545 ************************************ 00:05:59.545 END TEST blockdev_general 00:05:59.545 ************************************ 00:05:59.545 13:29:38 -- spdk/autotest.sh@196 -- # run_test bdev_raid /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:05:59.545 13:29:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:59.545 13:29:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.545 13:29:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.545 ************************************ 00:05:59.545 START TEST bdev_raid 00:05:59.545 ************************************ 00:05:59.545 13:29:38 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:05:59.803 * Looking for test storage... 00:05:59.803 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:05:59.803 13:29:38 -- bdev/bdev_raid.sh@12 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:59.803 13:29:38 -- bdev/nbd_common.sh@6 -- # set -e 00:05:59.803 13:29:38 -- bdev/bdev_raid.sh@14 -- # rpc_py='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:05:59.803 13:29:38 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:05:59.803 13:29:38 -- bdev/bdev_raid.sh@716 -- # uname -s 00:05:59.803 13:29:38 -- bdev/bdev_raid.sh@716 -- # '[' FreeBSD = Linux ']' 00:05:59.803 13:29:38 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:05:59.803 13:29:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:59.803 13:29:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.803 13:29:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.803 ************************************ 00:05:59.803 START TEST raid0_resize_test 00:05:59.803 ************************************ 00:05:59.803 13:29:38 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:05:59.803 13:29:38 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:05:59.803 13:29:38 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:05:59.803 13:29:38 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:05:59.803 13:29:38 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:05:59.803 13:29:38 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:05:59.803 13:29:38 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:05:59.803 13:29:38 -- bdev/bdev_raid.sh@301 -- # raid_pid=47551 00:05:59.803 Process raid pid: 47551 00:05:59.803 13:29:38 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 47551' 00:05:59.803 13:29:38 -- bdev/bdev_raid.sh@303 -- # waitforlisten 47551 /var/tmp/spdk-raid.sock 00:05:59.803 13:29:38 -- bdev/bdev_raid.sh@300 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:05:59.803 13:29:38 -- common/autotest_common.sh@819 -- # '[' -z 47551 ']' 
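For reference, the iostat consistency check that the bdev_stat run above performs can be condensed into the sketch below. Assumptions: the bdevperf instance started with "-z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C" is listening on /var/tmp/spdk.sock, "rpc.py"/"bdevperf.py" abbreviate the full script paths, the io1/per_ch/io2 variable names are illustrative, and the single jq add expression replaces the two per-channel queries shown in the trace.

rpc.py bdev_malloc_create -b Malloc_STAT 128 512
bdevperf.py perform_tests &                                   # start the 10 s randread job in the background
sleep 2
io1=$(rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
per_ch=$(rpc.py bdev_get_iostat -b Malloc_STAT -c | jq -r '[.channels[].num_read_ops] | add')
io2=$(rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
# The per-channel sum sampled in between must lie within the two whole-bdev totals.
[ "$per_ch" -ge "$io1" ] && [ "$per_ch" -le "$io2" ] && echo "iostat counters consistent"
rpc.py bdev_malloc_delete Malloc_STAT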
00:05:59.803 13:29:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:05:59.803 13:29:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:59.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:05:59.803 13:29:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:05:59.803 13:29:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:59.803 13:29:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.803 [2024-07-10 13:29:38.966577] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:59.803 [2024-07-10 13:29:38.966905] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:00.063 EAL: TSC is not safe to use in SMP mode 00:06:00.063 EAL: TSC is not invariant 00:06:00.063 [2024-07-10 13:29:39.405564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.324 [2024-07-10 13:29:39.482800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.324 [2024-07-10 13:29:39.483214] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:00.324 [2024-07-10 13:29:39.483228] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:00.583 13:29:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:00.583 13:29:39 -- common/autotest_common.sh@852 -- # return 0 00:06:00.583 13:29:39 -- bdev/bdev_raid.sh@305 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:06:00.841 Base_1 00:06:00.841 13:29:40 -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:06:01.099 Base_2 00:06:01.099 13:29:40 -- bdev/bdev_raid.sh@308 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:06:01.357 [2024-07-10 13:29:40.454131] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:01.357 [2024-07-10 13:29:40.454581] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:01.357 [2024-07-10 13:29:40.454607] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b492a00 00:06:01.357 [2024-07-10 13:29:40.454611] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:01.357 [2024-07-10 13:29:40.454642] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b4f5e20 00:06:01.357 [2024-07-10 13:29:40.454690] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b492a00 00:06:01.357 [2024-07-10 13:29:40.454698] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x82b492a00 00:06:01.357 [2024-07-10 13:29:40.454728] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:01.358 13:29:40 -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:06:01.358 [2024-07-10 13:29:40.682118] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:01.358 [2024-07-10 13:29:40.682137] bdev_raid.c:2083:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:01.358 true 
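The resize check traced here boils down to the following RPC sequence (a sketch; assumes the bdev_svc target started above is listening on /var/tmp/spdk-raid.sock and "rpc.py" abbreviates the full script path). With 512 B blocks, 131072 blocks is 64 MiB and 262144 blocks is 128 MiB, which is what the jq checks below expect.

rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512
rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512
rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64                      # only one leg grown
rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid | jq '.[].num_blocks'    # 131072: raid0 still capped at 2 x 32 MiB
rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64                      # both legs grown
rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid | jq '.[].num_blocks'    # 262144: the raid doubles to 128 MiB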
00:06:01.358 13:29:40 -- bdev/bdev_raid.sh@314 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:01.358 13:29:40 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:06:01.616 [2024-07-10 13:29:40.846136] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:01.616 13:29:40 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:06:01.616 13:29:40 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:06:01.616 13:29:40 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:06:01.616 13:29:40 -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:06:01.874 [2024-07-10 13:29:41.034120] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:01.874 [2024-07-10 13:29:41.034143] bdev_raid.c:2083:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:01.874 [2024-07-10 13:29:41.034183] raid0.c: 405:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:06:01.874 [2024-07-10 13:29:41.034191] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:01.874 true 00:06:01.874 13:29:41 -- bdev/bdev_raid.sh@325 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:01.874 13:29:41 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:06:02.132 [2024-07-10 13:29:41.234160] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:02.132 13:29:41 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:06:02.132 13:29:41 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:06:02.132 13:29:41 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:06:02.132 13:29:41 -- bdev/bdev_raid.sh@332 -- # killprocess 47551 00:06:02.132 13:29:41 -- common/autotest_common.sh@926 -- # '[' -z 47551 ']' 00:06:02.132 13:29:41 -- common/autotest_common.sh@930 -- # kill -0 47551 00:06:02.132 13:29:41 -- common/autotest_common.sh@931 -- # uname 00:06:02.132 13:29:41 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:02.132 13:29:41 -- common/autotest_common.sh@934 -- # ps -c -o command 47551 00:06:02.132 13:29:41 -- common/autotest_common.sh@934 -- # tail -1 00:06:02.132 13:29:41 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:06:02.132 13:29:41 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:06:02.132 killing process with pid 47551 00:06:02.132 13:29:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47551' 00:06:02.132 13:29:41 -- common/autotest_common.sh@945 -- # kill 47551 00:06:02.132 [2024-07-10 13:29:41.269201] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:02.132 [2024-07-10 13:29:41.269237] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:02.132 [2024-07-10 13:29:41.269251] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:02.132 [2024-07-10 13:29:41.269255] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b492a00 name Raid, state offline 00:06:02.132 13:29:41 -- common/autotest_common.sh@950 -- # wait 47551 00:06:02.132 [2024-07-10 13:29:41.269378] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:02.132 ************************************ 00:06:02.132 END TEST raid0_resize_test 00:06:02.132 ************************************ 
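The raid_state_function_test that follows walks a raid0 bdev through its configuring/online/offline states; a minimal sketch of the equivalent manual RPC sequence is given here (assumes a bdev_svc target on /var/tmp/spdk-raid.sock, "rpc.py" abbreviates the full path, and the trailing jq .state filter is added for readability where the trace below inspects the full JSON instead).

rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | \
    jq -r '.[] | select(.name == "Existed_Raid") | .state'                 # "configuring": neither base bdev exists yet
rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1   # 1 of 2 discovered, still "configuring"
rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2   # 2 of 2 discovered, the raid goes "online"
rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1             # raid0 has no redundancy, so it drops to "offline"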
00:06:02.132 13:29:41 -- bdev/bdev_raid.sh@334 -- # return 0 00:06:02.132 00:06:02.132 real 0m2.466s 00:06:02.132 user 0m3.485s 00:06:02.132 sys 0m0.726s 00:06:02.132 13:29:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.132 13:29:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.132 13:29:41 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:06:02.132 13:29:41 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:06:02.132 13:29:41 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:02.132 13:29:41 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:06:02.132 13:29:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.132 13:29:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.132 ************************************ 00:06:02.132 START TEST raid_state_function_test 00:06:02.132 ************************************ 00:06:02.132 13:29:41 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:06:02.132 13:29:41 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@226 -- # raid_pid=47589 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 47589' 00:06:02.133 Process raid pid: 47589 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:02.133 13:29:41 -- bdev/bdev_raid.sh@228 -- # waitforlisten 47589 /var/tmp/spdk-raid.sock 00:06:02.133 13:29:41 -- common/autotest_common.sh@819 -- # '[' -z 47589 ']' 00:06:02.133 13:29:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:02.133 13:29:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:02.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:06:02.133 13:29:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:02.133 13:29:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:02.133 13:29:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.391 [2024-07-10 13:29:41.485794] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:02.391 [2024-07-10 13:29:41.486153] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:02.650 EAL: TSC is not safe to use in SMP mode 00:06:02.650 EAL: TSC is not invariant 00:06:02.650 [2024-07-10 13:29:41.918975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.909 [2024-07-10 13:29:42.007273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.909 [2024-07-10 13:29:42.007699] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:02.909 [2024-07-10 13:29:42.007708] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:03.167 13:29:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:03.167 13:29:42 -- common/autotest_common.sh@852 -- # return 0 00:06:03.167 13:29:42 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:03.425 [2024-07-10 13:29:42.530642] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:03.425 [2024-07-10 13:29:42.530695] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:03.425 [2024-07-10 13:29:42.530699] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:03.425 [2024-07-10 13:29:42.530706] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:03.425 13:29:42 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:03.425 13:29:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:03.425 13:29:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:03.426 13:29:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:03.426 13:29:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:03.426 13:29:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:03.426 13:29:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:03.426 13:29:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:03.426 13:29:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:03.426 13:29:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:03.426 13:29:42 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:03.426 13:29:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:03.426 13:29:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:03.426 "name": "Existed_Raid", 00:06:03.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:03.426 "strip_size_kb": 64, 00:06:03.426 "state": "configuring", 00:06:03.426 "raid_level": "raid0", 00:06:03.426 "superblock": false, 00:06:03.426 "num_base_bdevs": 2, 00:06:03.426 "num_base_bdevs_discovered": 0, 00:06:03.426 "num_base_bdevs_operational": 2, 00:06:03.426 "base_bdevs_list": [ 00:06:03.426 { 00:06:03.426 "name": 
"BaseBdev1", 00:06:03.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:03.426 "is_configured": false, 00:06:03.426 "data_offset": 0, 00:06:03.426 "data_size": 0 00:06:03.426 }, 00:06:03.426 { 00:06:03.426 "name": "BaseBdev2", 00:06:03.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:03.426 "is_configured": false, 00:06:03.426 "data_offset": 0, 00:06:03.426 "data_size": 0 00:06:03.426 } 00:06:03.426 ] 00:06:03.426 }' 00:06:03.426 13:29:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:03.426 13:29:42 -- common/autotest_common.sh@10 -- # set +x 00:06:03.684 13:29:43 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:03.942 [2024-07-10 13:29:43.190637] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:03.942 [2024-07-10 13:29:43.190664] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829b2b500 name Existed_Raid, state configuring 00:06:03.942 13:29:43 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:04.201 [2024-07-10 13:29:43.386637] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:04.201 [2024-07-10 13:29:43.386698] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:04.201 [2024-07-10 13:29:43.386702] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:04.201 [2024-07-10 13:29:43.386708] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:04.201 13:29:43 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:04.459 [2024-07-10 13:29:43.583455] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:04.459 BaseBdev1 00:06:04.459 13:29:43 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:06:04.459 13:29:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:06:04.459 13:29:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:04.459 13:29:43 -- common/autotest_common.sh@889 -- # local i 00:06:04.459 13:29:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:04.459 13:29:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:04.459 13:29:43 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:04.459 13:29:43 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:04.719 [ 00:06:04.719 { 00:06:04.719 "name": "BaseBdev1", 00:06:04.719 "aliases": [ 00:06:04.719 "77c525ff-3ec0-11ef-b9c4-5b09e08d4792" 00:06:04.719 ], 00:06:04.719 "product_name": "Malloc disk", 00:06:04.719 "block_size": 512, 00:06:04.719 "num_blocks": 65536, 00:06:04.719 "uuid": "77c525ff-3ec0-11ef-b9c4-5b09e08d4792", 00:06:04.719 "assigned_rate_limits": { 00:06:04.719 "rw_ios_per_sec": 0, 00:06:04.719 "rw_mbytes_per_sec": 0, 00:06:04.719 "r_mbytes_per_sec": 0, 00:06:04.719 "w_mbytes_per_sec": 0 00:06:04.719 }, 00:06:04.719 "claimed": true, 00:06:04.719 "claim_type": "exclusive_write", 00:06:04.719 "zoned": false, 00:06:04.719 "supported_io_types": { 00:06:04.719 "read": true, 00:06:04.719 "write": true, 
00:06:04.719 "unmap": true, 00:06:04.719 "write_zeroes": true, 00:06:04.719 "flush": true, 00:06:04.719 "reset": true, 00:06:04.719 "compare": false, 00:06:04.719 "compare_and_write": false, 00:06:04.719 "abort": true, 00:06:04.719 "nvme_admin": false, 00:06:04.719 "nvme_io": false 00:06:04.719 }, 00:06:04.719 "memory_domains": [ 00:06:04.719 { 00:06:04.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.719 "dma_device_type": 2 00:06:04.719 } 00:06:04.719 ], 00:06:04.719 "driver_specific": {} 00:06:04.719 } 00:06:04.719 ] 00:06:04.719 13:29:43 -- common/autotest_common.sh@895 -- # return 0 00:06:04.719 13:29:43 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:04.719 13:29:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:04.719 13:29:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:04.719 13:29:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:04.719 13:29:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:04.719 13:29:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:04.719 13:29:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:04.719 13:29:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:04.719 13:29:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:04.719 13:29:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:04.719 13:29:43 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:04.719 13:29:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:04.978 13:29:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:04.978 "name": "Existed_Raid", 00:06:04.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:04.978 "strip_size_kb": 64, 00:06:04.978 "state": "configuring", 00:06:04.978 "raid_level": "raid0", 00:06:04.978 "superblock": false, 00:06:04.978 "num_base_bdevs": 2, 00:06:04.978 "num_base_bdevs_discovered": 1, 00:06:04.978 "num_base_bdevs_operational": 2, 00:06:04.978 "base_bdevs_list": [ 00:06:04.978 { 00:06:04.978 "name": "BaseBdev1", 00:06:04.978 "uuid": "77c525ff-3ec0-11ef-b9c4-5b09e08d4792", 00:06:04.978 "is_configured": true, 00:06:04.978 "data_offset": 0, 00:06:04.978 "data_size": 65536 00:06:04.978 }, 00:06:04.978 { 00:06:04.978 "name": "BaseBdev2", 00:06:04.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:04.978 "is_configured": false, 00:06:04.978 "data_offset": 0, 00:06:04.978 "data_size": 0 00:06:04.978 } 00:06:04.978 ] 00:06:04.978 }' 00:06:04.978 13:29:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:04.978 13:29:44 -- common/autotest_common.sh@10 -- # set +x 00:06:05.236 13:29:44 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:05.494 [2024-07-10 13:29:44.594676] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:05.494 [2024-07-10 13:29:44.594708] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829b2b500 name Existed_Raid, state configuring 00:06:05.494 13:29:44 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:06:05.494 13:29:44 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:05.494 [2024-07-10 13:29:44.786670] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:06:05.494 [2024-07-10 13:29:44.787296] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:05.494 [2024-07-10 13:29:44.787339] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:05.494 13:29:44 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:06:05.494 13:29:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:05.494 13:29:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:05.494 13:29:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:05.494 13:29:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:05.494 13:29:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:05.494 13:29:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:05.494 13:29:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:05.494 13:29:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:05.494 13:29:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:05.494 13:29:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:05.494 13:29:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:05.494 13:29:44 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:05.494 13:29:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:05.753 13:29:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:05.753 "name": "Existed_Raid", 00:06:05.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:05.753 "strip_size_kb": 64, 00:06:05.753 "state": "configuring", 00:06:05.753 "raid_level": "raid0", 00:06:05.753 "superblock": false, 00:06:05.753 "num_base_bdevs": 2, 00:06:05.753 "num_base_bdevs_discovered": 1, 00:06:05.753 "num_base_bdevs_operational": 2, 00:06:05.753 "base_bdevs_list": [ 00:06:05.753 { 00:06:05.753 "name": "BaseBdev1", 00:06:05.753 "uuid": "77c525ff-3ec0-11ef-b9c4-5b09e08d4792", 00:06:05.753 "is_configured": true, 00:06:05.753 "data_offset": 0, 00:06:05.753 "data_size": 65536 00:06:05.753 }, 00:06:05.753 { 00:06:05.753 "name": "BaseBdev2", 00:06:05.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:05.753 "is_configured": false, 00:06:05.753 "data_offset": 0, 00:06:05.753 "data_size": 0 00:06:05.753 } 00:06:05.753 ] 00:06:05.753 }' 00:06:05.753 13:29:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:05.753 13:29:44 -- common/autotest_common.sh@10 -- # set +x 00:06:06.011 13:29:45 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:06:06.271 [2024-07-10 13:29:45.430806] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:06.271 [2024-07-10 13:29:45.430833] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x829b2ba00 00:06:06.271 [2024-07-10 13:29:45.430836] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:06.271 [2024-07-10 13:29:45.430852] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829b8eec0 00:06:06.271 [2024-07-10 13:29:45.430919] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829b2ba00 00:06:06.271 [2024-07-10 13:29:45.430922] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x829b2ba00 00:06:06.271 [2024-07-10 13:29:45.430949] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:06.271 BaseBdev2 00:06:06.271 13:29:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:06:06.271 13:29:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:06:06.271 13:29:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:06.271 13:29:45 -- common/autotest_common.sh@889 -- # local i 00:06:06.271 13:29:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:06.271 13:29:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:06.271 13:29:45 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:06.552 13:29:45 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:06.552 [ 00:06:06.552 { 00:06:06.552 "name": "BaseBdev2", 00:06:06.552 "aliases": [ 00:06:06.552 "78df22bd-3ec0-11ef-b9c4-5b09e08d4792" 00:06:06.552 ], 00:06:06.552 "product_name": "Malloc disk", 00:06:06.552 "block_size": 512, 00:06:06.552 "num_blocks": 65536, 00:06:06.552 "uuid": "78df22bd-3ec0-11ef-b9c4-5b09e08d4792", 00:06:06.552 "assigned_rate_limits": { 00:06:06.552 "rw_ios_per_sec": 0, 00:06:06.552 "rw_mbytes_per_sec": 0, 00:06:06.552 "r_mbytes_per_sec": 0, 00:06:06.552 "w_mbytes_per_sec": 0 00:06:06.552 }, 00:06:06.552 "claimed": true, 00:06:06.552 "claim_type": "exclusive_write", 00:06:06.552 "zoned": false, 00:06:06.552 "supported_io_types": { 00:06:06.552 "read": true, 00:06:06.552 "write": true, 00:06:06.552 "unmap": true, 00:06:06.552 "write_zeroes": true, 00:06:06.552 "flush": true, 00:06:06.552 "reset": true, 00:06:06.552 "compare": false, 00:06:06.552 "compare_and_write": false, 00:06:06.552 "abort": true, 00:06:06.552 "nvme_admin": false, 00:06:06.552 "nvme_io": false 00:06:06.552 }, 00:06:06.552 "memory_domains": [ 00:06:06.552 { 00:06:06.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.552 "dma_device_type": 2 00:06:06.552 } 00:06:06.552 ], 00:06:06.552 "driver_specific": {} 00:06:06.552 } 00:06:06.552 ] 00:06:06.552 13:29:45 -- common/autotest_common.sh@895 -- # return 0 00:06:06.552 13:29:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:06:06.552 13:29:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:06.552 13:29:45 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:06.552 13:29:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:06.552 13:29:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:06.552 13:29:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:06.552 13:29:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:06.552 13:29:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:06.552 13:29:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:06.552 13:29:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:06.552 13:29:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:06.552 13:29:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:06.552 13:29:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:06.552 13:29:45 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:06.811 13:29:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:06.811 "name": "Existed_Raid", 00:06:06.811 "uuid": "78df2835-3ec0-11ef-b9c4-5b09e08d4792", 
00:06:06.811 "strip_size_kb": 64, 00:06:06.811 "state": "online", 00:06:06.811 "raid_level": "raid0", 00:06:06.811 "superblock": false, 00:06:06.811 "num_base_bdevs": 2, 00:06:06.811 "num_base_bdevs_discovered": 2, 00:06:06.811 "num_base_bdevs_operational": 2, 00:06:06.811 "base_bdevs_list": [ 00:06:06.811 { 00:06:06.811 "name": "BaseBdev1", 00:06:06.811 "uuid": "77c525ff-3ec0-11ef-b9c4-5b09e08d4792", 00:06:06.811 "is_configured": true, 00:06:06.811 "data_offset": 0, 00:06:06.811 "data_size": 65536 00:06:06.811 }, 00:06:06.811 { 00:06:06.811 "name": "BaseBdev2", 00:06:06.811 "uuid": "78df22bd-3ec0-11ef-b9c4-5b09e08d4792", 00:06:06.811 "is_configured": true, 00:06:06.811 "data_offset": 0, 00:06:06.811 "data_size": 65536 00:06:06.811 } 00:06:06.811 ] 00:06:06.811 }' 00:06:06.811 13:29:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:06.811 13:29:46 -- common/autotest_common.sh@10 -- # set +x 00:06:07.070 13:29:46 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:07.330 [2024-07-10 13:29:46.454741] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:07.330 [2024-07-10 13:29:46.454766] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:07.330 [2024-07-10 13:29:46.454778] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@197 -- # return 1 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:07.330 "name": "Existed_Raid", 00:06:07.330 "uuid": "78df2835-3ec0-11ef-b9c4-5b09e08d4792", 00:06:07.330 "strip_size_kb": 64, 00:06:07.330 "state": "offline", 00:06:07.330 "raid_level": "raid0", 00:06:07.330 "superblock": false, 00:06:07.330 "num_base_bdevs": 2, 00:06:07.330 "num_base_bdevs_discovered": 1, 00:06:07.330 "num_base_bdevs_operational": 1, 00:06:07.330 "base_bdevs_list": [ 00:06:07.330 { 00:06:07.330 "name": null, 00:06:07.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:07.330 "is_configured": false, 00:06:07.330 "data_offset": 0, 00:06:07.330 "data_size": 65536 00:06:07.330 }, 00:06:07.330 { 00:06:07.330 "name": "BaseBdev2", 
00:06:07.330 "uuid": "78df22bd-3ec0-11ef-b9c4-5b09e08d4792", 00:06:07.330 "is_configured": true, 00:06:07.330 "data_offset": 0, 00:06:07.330 "data_size": 65536 00:06:07.330 } 00:06:07.330 ] 00:06:07.330 }' 00:06:07.330 13:29:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:07.330 13:29:46 -- common/autotest_common.sh@10 -- # set +x 00:06:07.898 13:29:46 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:06:07.898 13:29:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:07.898 13:29:46 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:07.898 13:29:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:06:07.898 13:29:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:06:07.898 13:29:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:07.898 13:29:47 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:06:08.157 [2024-07-10 13:29:47.311471] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:08.157 [2024-07-10 13:29:47.311499] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829b2ba00 name Existed_Raid, state offline 00:06:08.157 13:29:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:06:08.157 13:29:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:08.157 13:29:47 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:08.157 13:29:47 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@287 -- # killprocess 47589 00:06:08.417 13:29:47 -- common/autotest_common.sh@926 -- # '[' -z 47589 ']' 00:06:08.417 13:29:47 -- common/autotest_common.sh@930 -- # kill -0 47589 00:06:08.417 13:29:47 -- common/autotest_common.sh@931 -- # uname 00:06:08.417 13:29:47 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:08.417 13:29:47 -- common/autotest_common.sh@934 -- # ps -c -o command 47589 00:06:08.417 13:29:47 -- common/autotest_common.sh@934 -- # tail -1 00:06:08.417 13:29:47 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:06:08.417 13:29:47 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:06:08.417 killing process with pid 47589 00:06:08.417 13:29:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47589' 00:06:08.417 13:29:47 -- common/autotest_common.sh@945 -- # kill 47589 00:06:08.417 [2024-07-10 13:29:47.527601] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:08.417 [2024-07-10 13:29:47.527637] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:08.417 13:29:47 -- common/autotest_common.sh@950 -- # wait 47589 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@289 -- # return 0 00:06:08.417 00:06:08.417 real 0m6.210s 00:06:08.417 user 0m10.524s 00:06:08.417 sys 0m1.252s 00:06:08.417 13:29:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.417 13:29:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.417 ************************************ 00:06:08.417 END TEST raid_state_function_test 00:06:08.417 ************************************ 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:08.417 
13:29:47 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:06:08.417 13:29:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.417 13:29:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.417 ************************************ 00:06:08.417 START TEST raid_state_function_test_sb 00:06:08.417 ************************************ 00:06:08.417 13:29:47 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@226 -- # raid_pid=47785 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 47785' 00:06:08.417 Process raid pid: 47785 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:08.417 13:29:47 -- bdev/bdev_raid.sh@228 -- # waitforlisten 47785 /var/tmp/spdk-raid.sock 00:06:08.418 13:29:47 -- common/autotest_common.sh@819 -- # '[' -z 47785 ']' 00:06:08.418 13:29:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:08.418 13:29:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:08.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:08.418 13:29:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:08.418 13:29:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:08.418 13:29:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.418 [2024-07-10 13:29:47.752048] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
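For orientation, the RPC flow that raid_state_function_test_sb drives against this bdev_svc instance reduces to the short sketch below. It reuses only commands that appear verbatim in the trace (bdev_malloc_create, bdev_raid_create, bdev_raid_get_bdevs); paths are shortened to a relative SPDK checkout and the ordering in the real test differs, so treat it as an illustration rather than the test script itself.

    SOCK=/var/tmp/spdk-raid.sock
    ./test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 -L bdev_raid &

    # Base bdevs: 32 MiB malloc disks with 512-byte blocks.
    ./scripts/rpc.py -s "$SOCK" bdev_malloc_create 32 512 -b BaseBdev1
    ./scripts/rpc.py -s "$SOCK" bdev_malloc_create 32 512 -b BaseBdev2

    # raid0 with a 64 KiB strip size; -s asks for the on-disk superblock variant.
    ./scripts/rpc.py -s "$SOCK" bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

    # verify_raid_bdev_state amounts to filtering this output with jq and comparing
    # .state, .raid_level, .strip_size_kb and the num_base_bdevs_* counters.
    ./scripts/rpc.py -s "$SOCK" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

The raid_bdev_info JSON blocks later in this log are the output of that last command.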
00:06:08.418 [2024-07-10 13:29:47.752418] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:08.987 EAL: TSC is not safe to use in SMP mode 00:06:08.987 EAL: TSC is not invariant 00:06:08.987 [2024-07-10 13:29:48.185904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.987 [2024-07-10 13:29:48.276250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.987 [2024-07-10 13:29:48.276712] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:08.987 [2024-07-10 13:29:48.276722] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:09.556 13:29:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:09.556 13:29:48 -- common/autotest_common.sh@852 -- # return 0 00:06:09.556 13:29:48 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:09.556 [2024-07-10 13:29:48.831726] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:09.556 [2024-07-10 13:29:48.831783] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:09.556 [2024-07-10 13:29:48.831787] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:09.556 [2024-07-10 13:29:48.831794] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:09.556 13:29:48 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:09.556 13:29:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:09.556 13:29:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:09.556 13:29:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:09.556 13:29:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:09.556 13:29:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:09.556 13:29:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:09.556 13:29:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:09.556 13:29:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:09.556 13:29:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:09.556 13:29:48 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:09.556 13:29:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:09.815 13:29:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:09.815 "name": "Existed_Raid", 00:06:09.815 "uuid": "7ae6171a-3ec0-11ef-b9c4-5b09e08d4792", 00:06:09.815 "strip_size_kb": 64, 00:06:09.815 "state": "configuring", 00:06:09.815 "raid_level": "raid0", 00:06:09.815 "superblock": true, 00:06:09.815 "num_base_bdevs": 2, 00:06:09.815 "num_base_bdevs_discovered": 0, 00:06:09.815 "num_base_bdevs_operational": 2, 00:06:09.815 "base_bdevs_list": [ 00:06:09.815 { 00:06:09.815 "name": "BaseBdev1", 00:06:09.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:09.815 "is_configured": false, 00:06:09.815 "data_offset": 0, 00:06:09.815 "data_size": 0 00:06:09.815 }, 00:06:09.815 { 00:06:09.815 "name": "BaseBdev2", 00:06:09.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:09.815 "is_configured": false, 00:06:09.815 "data_offset": 0, 00:06:09.815 "data_size": 0 00:06:09.815 } 00:06:09.815 ] 
00:06:09.815 }' 00:06:09.815 13:29:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:09.815 13:29:49 -- common/autotest_common.sh@10 -- # set +x 00:06:10.075 13:29:49 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:10.335 [2024-07-10 13:29:49.487725] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:10.335 [2024-07-10 13:29:49.487754] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82caef500 name Existed_Raid, state configuring 00:06:10.335 13:29:49 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:10.335 [2024-07-10 13:29:49.655751] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:10.335 [2024-07-10 13:29:49.655802] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:10.335 [2024-07-10 13:29:49.655806] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:10.335 [2024-07-10 13:29:49.655812] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:10.335 13:29:49 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:10.594 [2024-07-10 13:29:49.848564] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:10.594 BaseBdev1 00:06:10.594 13:29:49 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:06:10.594 13:29:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:06:10.594 13:29:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:10.594 13:29:49 -- common/autotest_common.sh@889 -- # local i 00:06:10.594 13:29:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:10.594 13:29:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:10.594 13:29:49 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:10.853 13:29:50 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:11.113 [ 00:06:11.113 { 00:06:11.113 "name": "BaseBdev1", 00:06:11.113 "aliases": [ 00:06:11.113 "7b812038-3ec0-11ef-b9c4-5b09e08d4792" 00:06:11.113 ], 00:06:11.113 "product_name": "Malloc disk", 00:06:11.113 "block_size": 512, 00:06:11.113 "num_blocks": 65536, 00:06:11.113 "uuid": "7b812038-3ec0-11ef-b9c4-5b09e08d4792", 00:06:11.113 "assigned_rate_limits": { 00:06:11.113 "rw_ios_per_sec": 0, 00:06:11.113 "rw_mbytes_per_sec": 0, 00:06:11.113 "r_mbytes_per_sec": 0, 00:06:11.113 "w_mbytes_per_sec": 0 00:06:11.113 }, 00:06:11.113 "claimed": true, 00:06:11.113 "claim_type": "exclusive_write", 00:06:11.113 "zoned": false, 00:06:11.113 "supported_io_types": { 00:06:11.113 "read": true, 00:06:11.113 "write": true, 00:06:11.113 "unmap": true, 00:06:11.113 "write_zeroes": true, 00:06:11.113 "flush": true, 00:06:11.113 "reset": true, 00:06:11.113 "compare": false, 00:06:11.113 "compare_and_write": false, 00:06:11.113 "abort": true, 00:06:11.113 "nvme_admin": false, 00:06:11.113 "nvme_io": false 00:06:11.113 }, 00:06:11.113 "memory_domains": [ 00:06:11.113 { 00:06:11.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:11.113 
"dma_device_type": 2 00:06:11.113 } 00:06:11.113 ], 00:06:11.113 "driver_specific": {} 00:06:11.113 } 00:06:11.113 ] 00:06:11.113 13:29:50 -- common/autotest_common.sh@895 -- # return 0 00:06:11.113 13:29:50 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:11.113 13:29:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:11.113 13:29:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:11.113 13:29:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:11.113 13:29:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:11.113 13:29:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:11.113 13:29:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:11.113 13:29:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:11.113 13:29:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:11.113 13:29:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:11.113 13:29:50 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:11.113 13:29:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:11.113 13:29:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:11.113 "name": "Existed_Raid", 00:06:11.113 "uuid": "7b63d391-3ec0-11ef-b9c4-5b09e08d4792", 00:06:11.113 "strip_size_kb": 64, 00:06:11.113 "state": "configuring", 00:06:11.113 "raid_level": "raid0", 00:06:11.113 "superblock": true, 00:06:11.113 "num_base_bdevs": 2, 00:06:11.113 "num_base_bdevs_discovered": 1, 00:06:11.113 "num_base_bdevs_operational": 2, 00:06:11.114 "base_bdevs_list": [ 00:06:11.114 { 00:06:11.114 "name": "BaseBdev1", 00:06:11.114 "uuid": "7b812038-3ec0-11ef-b9c4-5b09e08d4792", 00:06:11.114 "is_configured": true, 00:06:11.114 "data_offset": 2048, 00:06:11.114 "data_size": 63488 00:06:11.114 }, 00:06:11.114 { 00:06:11.114 "name": "BaseBdev2", 00:06:11.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:11.114 "is_configured": false, 00:06:11.114 "data_offset": 0, 00:06:11.114 "data_size": 0 00:06:11.114 } 00:06:11.114 ] 00:06:11.114 }' 00:06:11.114 13:29:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:11.114 13:29:50 -- common/autotest_common.sh@10 -- # set +x 00:06:11.372 13:29:50 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:11.631 [2024-07-10 13:29:50.871811] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:11.631 [2024-07-10 13:29:50.871846] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82caef500 name Existed_Raid, state configuring 00:06:11.631 13:29:50 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:06:11.631 13:29:50 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:11.890 13:29:51 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:12.149 BaseBdev1 00:06:12.149 13:29:51 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:06:12.149 13:29:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:06:12.149 13:29:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:12.149 13:29:51 -- common/autotest_common.sh@889 -- # local i 00:06:12.149 13:29:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 
00:06:12.149 13:29:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:12.149 13:29:51 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:12.149 13:29:51 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:12.407 [ 00:06:12.408 { 00:06:12.408 "name": "BaseBdev1", 00:06:12.408 "aliases": [ 00:06:12.408 "7c594d3c-3ec0-11ef-b9c4-5b09e08d4792" 00:06:12.408 ], 00:06:12.408 "product_name": "Malloc disk", 00:06:12.408 "block_size": 512, 00:06:12.408 "num_blocks": 65536, 00:06:12.408 "uuid": "7c594d3c-3ec0-11ef-b9c4-5b09e08d4792", 00:06:12.408 "assigned_rate_limits": { 00:06:12.408 "rw_ios_per_sec": 0, 00:06:12.408 "rw_mbytes_per_sec": 0, 00:06:12.408 "r_mbytes_per_sec": 0, 00:06:12.408 "w_mbytes_per_sec": 0 00:06:12.408 }, 00:06:12.408 "claimed": false, 00:06:12.408 "zoned": false, 00:06:12.408 "supported_io_types": { 00:06:12.408 "read": true, 00:06:12.408 "write": true, 00:06:12.408 "unmap": true, 00:06:12.408 "write_zeroes": true, 00:06:12.408 "flush": true, 00:06:12.408 "reset": true, 00:06:12.408 "compare": false, 00:06:12.408 "compare_and_write": false, 00:06:12.408 "abort": true, 00:06:12.408 "nvme_admin": false, 00:06:12.408 "nvme_io": false 00:06:12.408 }, 00:06:12.408 "memory_domains": [ 00:06:12.408 { 00:06:12.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:12.408 "dma_device_type": 2 00:06:12.408 } 00:06:12.408 ], 00:06:12.408 "driver_specific": {} 00:06:12.408 } 00:06:12.408 ] 00:06:12.408 13:29:51 -- common/autotest_common.sh@895 -- # return 0 00:06:12.408 13:29:51 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:12.667 [2024-07-10 13:29:51.828506] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:12.667 [2024-07-10 13:29:51.828958] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:12.667 [2024-07-10 13:29:51.829000] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:12.667 13:29:51 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:06:12.667 13:29:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:12.667 13:29:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:12.667 13:29:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:12.667 13:29:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:12.667 13:29:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:12.667 13:29:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:12.667 13:29:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:12.667 13:29:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:12.667 13:29:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:12.667 13:29:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:12.667 13:29:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:12.667 13:29:51 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:12.667 13:29:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:12.926 13:29:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:06:12.926 "name": "Existed_Raid", 00:06:12.926 "uuid": "7caf5c8c-3ec0-11ef-b9c4-5b09e08d4792", 00:06:12.926 "strip_size_kb": 64, 00:06:12.926 "state": "configuring", 00:06:12.926 "raid_level": "raid0", 00:06:12.926 "superblock": true, 00:06:12.926 "num_base_bdevs": 2, 00:06:12.926 "num_base_bdevs_discovered": 1, 00:06:12.926 "num_base_bdevs_operational": 2, 00:06:12.926 "base_bdevs_list": [ 00:06:12.926 { 00:06:12.926 "name": "BaseBdev1", 00:06:12.926 "uuid": "7c594d3c-3ec0-11ef-b9c4-5b09e08d4792", 00:06:12.926 "is_configured": true, 00:06:12.926 "data_offset": 2048, 00:06:12.926 "data_size": 63488 00:06:12.926 }, 00:06:12.926 { 00:06:12.926 "name": "BaseBdev2", 00:06:12.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:12.926 "is_configured": false, 00:06:12.926 "data_offset": 0, 00:06:12.926 "data_size": 0 00:06:12.926 } 00:06:12.926 ] 00:06:12.926 }' 00:06:12.926 13:29:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:12.926 13:29:52 -- common/autotest_common.sh@10 -- # set +x 00:06:13.185 13:29:52 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:06:13.185 [2024-07-10 13:29:52.472622] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:13.185 [2024-07-10 13:29:52.472700] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82caefa00 00:06:13.185 [2024-07-10 13:29:52.472705] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:13.185 [2024-07-10 13:29:52.472723] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cb52ec0 00:06:13.185 [2024-07-10 13:29:52.472751] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82caefa00 00:06:13.185 [2024-07-10 13:29:52.472753] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82caefa00 00:06:13.185 [2024-07-10 13:29:52.472768] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:13.185 BaseBdev2 00:06:13.185 13:29:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:06:13.185 13:29:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:06:13.185 13:29:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:13.185 13:29:52 -- common/autotest_common.sh@889 -- # local i 00:06:13.185 13:29:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:13.185 13:29:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:13.185 13:29:52 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:13.444 13:29:52 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:13.702 [ 00:06:13.702 { 00:06:13.702 "name": "BaseBdev2", 00:06:13.702 "aliases": [ 00:06:13.702 "7d11a1bc-3ec0-11ef-b9c4-5b09e08d4792" 00:06:13.702 ], 00:06:13.702 "product_name": "Malloc disk", 00:06:13.702 "block_size": 512, 00:06:13.702 "num_blocks": 65536, 00:06:13.702 "uuid": "7d11a1bc-3ec0-11ef-b9c4-5b09e08d4792", 00:06:13.702 "assigned_rate_limits": { 00:06:13.702 "rw_ios_per_sec": 0, 00:06:13.702 "rw_mbytes_per_sec": 0, 00:06:13.702 "r_mbytes_per_sec": 0, 00:06:13.702 "w_mbytes_per_sec": 0 00:06:13.702 }, 00:06:13.702 "claimed": true, 00:06:13.702 "claim_type": "exclusive_write", 00:06:13.702 "zoned": false, 00:06:13.702 "supported_io_types": { 
00:06:13.702 "read": true, 00:06:13.702 "write": true, 00:06:13.702 "unmap": true, 00:06:13.702 "write_zeroes": true, 00:06:13.702 "flush": true, 00:06:13.702 "reset": true, 00:06:13.702 "compare": false, 00:06:13.702 "compare_and_write": false, 00:06:13.702 "abort": true, 00:06:13.702 "nvme_admin": false, 00:06:13.702 "nvme_io": false 00:06:13.702 }, 00:06:13.702 "memory_domains": [ 00:06:13.702 { 00:06:13.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.702 "dma_device_type": 2 00:06:13.702 } 00:06:13.702 ], 00:06:13.702 "driver_specific": {} 00:06:13.702 } 00:06:13.702 ] 00:06:13.702 13:29:52 -- common/autotest_common.sh@895 -- # return 0 00:06:13.702 13:29:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:06:13.702 13:29:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:13.702 13:29:52 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:13.702 13:29:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:13.702 13:29:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:13.702 13:29:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:13.702 13:29:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:13.702 13:29:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:13.702 13:29:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:13.702 13:29:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:13.702 13:29:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:13.702 13:29:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:13.702 13:29:52 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:13.702 13:29:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:13.967 13:29:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:13.967 "name": "Existed_Raid", 00:06:13.967 "uuid": "7caf5c8c-3ec0-11ef-b9c4-5b09e08d4792", 00:06:13.967 "strip_size_kb": 64, 00:06:13.967 "state": "online", 00:06:13.967 "raid_level": "raid0", 00:06:13.967 "superblock": true, 00:06:13.967 "num_base_bdevs": 2, 00:06:13.967 "num_base_bdevs_discovered": 2, 00:06:13.967 "num_base_bdevs_operational": 2, 00:06:13.967 "base_bdevs_list": [ 00:06:13.967 { 00:06:13.967 "name": "BaseBdev1", 00:06:13.967 "uuid": "7c594d3c-3ec0-11ef-b9c4-5b09e08d4792", 00:06:13.967 "is_configured": true, 00:06:13.967 "data_offset": 2048, 00:06:13.967 "data_size": 63488 00:06:13.967 }, 00:06:13.967 { 00:06:13.967 "name": "BaseBdev2", 00:06:13.967 "uuid": "7d11a1bc-3ec0-11ef-b9c4-5b09e08d4792", 00:06:13.967 "is_configured": true, 00:06:13.967 "data_offset": 2048, 00:06:13.967 "data_size": 63488 00:06:13.967 } 00:06:13.967 ] 00:06:13.967 }' 00:06:13.967 13:29:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:13.967 13:29:53 -- common/autotest_common.sh@10 -- # set +x 00:06:14.243 13:29:53 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:14.243 [2024-07-10 13:29:53.572565] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:14.243 [2024-07-10 13:29:53.572595] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:14.243 [2024-07-10 13:29:53.572608] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:14.503 13:29:53 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:06:14.503 13:29:53 -- bdev/bdev_raid.sh@264 
-- # has_redundancy raid0 00:06:14.503 13:29:53 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:06:14.503 13:29:53 -- bdev/bdev_raid.sh@197 -- # return 1 00:06:14.503 13:29:53 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:06:14.503 13:29:53 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:14.503 13:29:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:14.503 13:29:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:06:14.503 13:29:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:14.503 13:29:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:14.503 13:29:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:06:14.503 13:29:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:14.503 13:29:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:14.503 13:29:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:14.503 13:29:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:14.503 13:29:53 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:14.503 13:29:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:14.503 13:29:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:14.503 "name": "Existed_Raid", 00:06:14.503 "uuid": "7caf5c8c-3ec0-11ef-b9c4-5b09e08d4792", 00:06:14.503 "strip_size_kb": 64, 00:06:14.503 "state": "offline", 00:06:14.503 "raid_level": "raid0", 00:06:14.503 "superblock": true, 00:06:14.503 "num_base_bdevs": 2, 00:06:14.503 "num_base_bdevs_discovered": 1, 00:06:14.503 "num_base_bdevs_operational": 1, 00:06:14.503 "base_bdevs_list": [ 00:06:14.503 { 00:06:14.503 "name": null, 00:06:14.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:14.503 "is_configured": false, 00:06:14.503 "data_offset": 2048, 00:06:14.503 "data_size": 63488 00:06:14.503 }, 00:06:14.503 { 00:06:14.503 "name": "BaseBdev2", 00:06:14.503 "uuid": "7d11a1bc-3ec0-11ef-b9c4-5b09e08d4792", 00:06:14.504 "is_configured": true, 00:06:14.504 "data_offset": 2048, 00:06:14.504 "data_size": 63488 00:06:14.504 } 00:06:14.504 ] 00:06:14.504 }' 00:06:14.504 13:29:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:14.504 13:29:53 -- common/autotest_common.sh@10 -- # set +x 00:06:14.763 13:29:54 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:06:14.763 13:29:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:14.763 13:29:54 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:14.763 13:29:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:06:15.021 13:29:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:06:15.021 13:29:54 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:15.021 13:29:54 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:06:15.280 [2024-07-10 13:29:54.433422] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:15.280 [2024-07-10 13:29:54.433459] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82caefa00 name Existed_Raid, state offline 00:06:15.280 13:29:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:06:15.280 13:29:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:15.280 13:29:54 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:15.280 13:29:54 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:06:15.538 13:29:54 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:06:15.538 13:29:54 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:06:15.538 13:29:54 -- bdev/bdev_raid.sh@287 -- # killprocess 47785 00:06:15.538 13:29:54 -- common/autotest_common.sh@926 -- # '[' -z 47785 ']' 00:06:15.538 13:29:54 -- common/autotest_common.sh@930 -- # kill -0 47785 00:06:15.538 13:29:54 -- common/autotest_common.sh@931 -- # uname 00:06:15.538 13:29:54 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:15.538 13:29:54 -- common/autotest_common.sh@934 -- # ps -c -o command 47785 00:06:15.538 13:29:54 -- common/autotest_common.sh@934 -- # tail -1 00:06:15.539 13:29:54 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:06:15.539 13:29:54 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:06:15.539 killing process with pid 47785 00:06:15.539 13:29:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47785' 00:06:15.539 13:29:54 -- common/autotest_common.sh@945 -- # kill 47785 00:06:15.539 [2024-07-10 13:29:54.643877] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:15.539 [2024-07-10 13:29:54.643921] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:15.539 13:29:54 -- common/autotest_common.sh@950 -- # wait 47785 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@289 -- # return 0 00:06:15.539 00:06:15.539 real 0m7.056s 00:06:15.539 user 0m12.015s 00:06:15.539 sys 0m1.386s 00:06:15.539 13:29:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.539 13:29:54 -- common/autotest_common.sh@10 -- # set +x 00:06:15.539 ************************************ 00:06:15.539 END TEST raid_state_function_test_sb 00:06:15.539 ************************************ 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:06:15.539 13:29:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:15.539 13:29:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.539 13:29:54 -- common/autotest_common.sh@10 -- # set +x 00:06:15.539 ************************************ 00:06:15.539 START TEST raid_superblock_test 00:06:15.539 ************************************ 00:06:15.539 13:29:54 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@350 -- # 
strip_size=64 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@357 -- # raid_pid=47984 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@358 -- # waitforlisten 47984 /var/tmp/spdk-raid.sock 00:06:15.539 13:29:54 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:06:15.539 13:29:54 -- common/autotest_common.sh@819 -- # '[' -z 47984 ']' 00:06:15.539 13:29:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:15.539 13:29:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:15.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:15.539 13:29:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:15.539 13:29:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:15.539 13:29:54 -- common/autotest_common.sh@10 -- # set +x 00:06:15.539 [2024-07-10 13:29:54.850344] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:15.539 [2024-07-10 13:29:54.850599] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:16.105 EAL: TSC is not safe to use in SMP mode 00:06:16.105 EAL: TSC is not invariant 00:06:16.105 [2024-07-10 13:29:55.285610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.105 [2024-07-10 13:29:55.362014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.105 [2024-07-10 13:29:55.362460] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:16.105 [2024-07-10 13:29:55.362469] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:16.673 13:29:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:16.673 13:29:55 -- common/autotest_common.sh@852 -- # return 0 00:06:16.673 13:29:55 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:06:16.673 13:29:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:06:16.673 13:29:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:06:16.673 13:29:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:06:16.673 13:29:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:16.673 13:29:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:16.673 13:29:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:06:16.673 13:29:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:16.673 13:29:55 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:06:16.673 malloc1 00:06:16.673 13:29:55 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:16.930 [2024-07-10 13:29:56.141498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:16.930 [2024-07-10 13:29:56.141556] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:16.931 [2024-07-10 13:29:56.142112] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e99780 00:06:16.931 [2024-07-10 13:29:56.142141] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:16.931 [2024-07-10 13:29:56.142876] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:16.931 [2024-07-10 13:29:56.142908] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:16.931 pt1 00:06:16.931 13:29:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:06:16.931 13:29:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:06:16.931 13:29:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:06:16.931 13:29:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:06:16.931 13:29:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:16.931 13:29:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:16.931 13:29:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:06:16.931 13:29:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:16.931 13:29:56 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:06:17.189 malloc2 00:06:17.189 13:29:56 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:17.189 [2024-07-10 13:29:56.525517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:17.189 [2024-07-10 13:29:56.525568] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:17.189 [2024-07-10 13:29:56.525623] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e99c80 00:06:17.189 [2024-07-10 13:29:56.525629] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:17.189 [2024-07-10 13:29:56.526107] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:17.189 [2024-07-10 13:29:56.526135] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:17.189 pt2 00:06:17.447 13:29:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:06:17.447 13:29:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:06:17.447 13:29:56 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:06:17.447 [2024-07-10 13:29:56.709532] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:17.447 [2024-07-10 13:29:56.710007] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:17.447 [2024-07-10 13:29:56.710063] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x829e99f00 00:06:17.447 [2024-07-10 13:29:56.710069] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:17.447 [2024-07-10 13:29:56.710099] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829efce20 00:06:17.447 [2024-07-10 13:29:56.710156] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829e99f00 00:06:17.447 [2024-07-10 13:29:56.710160] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x829e99f00 00:06:17.447 [2024-07-10 13:29:56.710181] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:17.447 13:29:56 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:17.447 13:29:56 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:06:17.447 13:29:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:17.447 13:29:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:17.447 13:29:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:17.447 13:29:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:17.447 13:29:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:17.447 13:29:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:17.447 13:29:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:17.447 13:29:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:17.447 13:29:56 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:17.447 13:29:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:17.707 13:29:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:17.707 "name": "raid_bdev1", 00:06:17.707 "uuid": "7f9825ac-3ec0-11ef-b9c4-5b09e08d4792", 00:06:17.707 "strip_size_kb": 64, 00:06:17.707 "state": "online", 00:06:17.707 "raid_level": "raid0", 00:06:17.707 "superblock": true, 00:06:17.707 "num_base_bdevs": 2, 00:06:17.707 "num_base_bdevs_discovered": 2, 00:06:17.707 "num_base_bdevs_operational": 2, 00:06:17.707 "base_bdevs_list": [ 00:06:17.707 { 00:06:17.707 "name": "pt1", 00:06:17.707 "uuid": "66bccc76-8a95-d254-be88-d560058c70a3", 00:06:17.707 "is_configured": true, 00:06:17.707 "data_offset": 2048, 00:06:17.707 "data_size": 63488 00:06:17.707 }, 00:06:17.707 { 00:06:17.707 "name": "pt2", 00:06:17.707 "uuid": "47bbf278-fa03-5e5a-86c2-11723ba4ab91", 00:06:17.707 "is_configured": true, 00:06:17.707 "data_offset": 2048, 00:06:17.707 "data_size": 63488 00:06:17.707 } 00:06:17.707 ] 00:06:17.707 }' 00:06:17.707 13:29:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:17.707 13:29:56 -- common/autotest_common.sh@10 -- # set +x 00:06:17.966 13:29:57 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:06:17.966 13:29:57 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:06:18.225 [2024-07-10 13:29:57.345571] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:18.225 13:29:57 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=7f9825ac-3ec0-11ef-b9c4-5b09e08d4792 00:06:18.225 13:29:57 -- bdev/bdev_raid.sh@380 -- # '[' -z 7f9825ac-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:06:18.225 13:29:57 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:06:18.225 [2024-07-10 13:29:57.541524] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:18.225 [2024-07-10 13:29:57.541545] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:18.225 [2024-07-10 13:29:57.541560] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:18.225 [2024-07-10 13:29:57.541586] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:18.225 [2024-07-10 13:29:57.541589] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829e99f00 name raid_bdev1, state offline 00:06:18.225 13:29:57 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:18.225 13:29:57 -- 
bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:06:18.483 13:29:57 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:06:18.483 13:29:57 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:06:18.483 13:29:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:06:18.483 13:29:57 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:06:18.754 13:29:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:06:18.754 13:29:57 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:06:19.013 13:29:58 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:06:19.013 13:29:58 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:19.013 13:29:58 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:06:19.013 13:29:58 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:06:19.013 13:29:58 -- common/autotest_common.sh@640 -- # local es=0 00:06:19.013 13:29:58 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:06:19.013 13:29:58 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:19.013 13:29:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:19.013 13:29:58 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:19.013 13:29:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:19.013 13:29:58 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:19.013 13:29:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:19.013 13:29:58 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:19.013 13:29:58 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:19.013 13:29:58 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:06:19.270 [2024-07-10 13:29:58.477590] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:19.270 [2024-07-10 13:29:58.478067] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:19.270 [2024-07-10 13:29:58.478090] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:06:19.270 [2024-07-10 13:29:58.478123] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:06:19.270 [2024-07-10 13:29:58.478131] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:19.270 [2024-07-10 13:29:58.478134] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829e99c80 name raid_bdev1, state configuring 00:06:19.270 request: 00:06:19.270 { 00:06:19.270 "name": "raid_bdev1", 00:06:19.270 "raid_level": "raid0", 00:06:19.270 "base_bdevs": [ 00:06:19.270 "malloc1", 00:06:19.270 "malloc2" 00:06:19.270 ], 00:06:19.270 "superblock": 
false, 00:06:19.270 "strip_size_kb": 64, 00:06:19.270 "method": "bdev_raid_create", 00:06:19.270 "req_id": 1 00:06:19.270 } 00:06:19.270 Got JSON-RPC error response 00:06:19.270 response: 00:06:19.270 { 00:06:19.270 "code": -17, 00:06:19.270 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:19.270 } 00:06:19.270 13:29:58 -- common/autotest_common.sh@643 -- # es=1 00:06:19.270 13:29:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:19.271 13:29:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:19.271 13:29:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:19.271 13:29:58 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:06:19.271 13:29:58 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:19.528 13:29:58 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:06:19.528 13:29:58 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:06:19.528 13:29:58 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:19.528 [2024-07-10 13:29:58.861587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:19.528 [2024-07-10 13:29:58.861652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:19.528 [2024-07-10 13:29:58.861677] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e99780 00:06:19.528 [2024-07-10 13:29:58.861683] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:19.528 [2024-07-10 13:29:58.862184] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:19.528 [2024-07-10 13:29:58.862209] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:19.528 [2024-07-10 13:29:58.862227] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:06:19.528 [2024-07-10 13:29:58.862237] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:19.528 pt1 00:06:19.787 13:29:58 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:06:19.787 13:29:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:06:19.787 13:29:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:19.787 13:29:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:19.787 13:29:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:19.787 13:29:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:19.787 13:29:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:19.787 13:29:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:19.787 13:29:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:19.787 13:29:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:19.787 13:29:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:19.787 13:29:58 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:19.787 13:29:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:19.787 "name": "raid_bdev1", 00:06:19.787 "uuid": "7f9825ac-3ec0-11ef-b9c4-5b09e08d4792", 00:06:19.787 "strip_size_kb": 64, 00:06:19.787 "state": "configuring", 00:06:19.787 "raid_level": "raid0", 00:06:19.787 "superblock": true, 00:06:19.787 "num_base_bdevs": 2, 00:06:19.787 
"num_base_bdevs_discovered": 1, 00:06:19.787 "num_base_bdevs_operational": 2, 00:06:19.787 "base_bdevs_list": [ 00:06:19.787 { 00:06:19.787 "name": "pt1", 00:06:19.787 "uuid": "66bccc76-8a95-d254-be88-d560058c70a3", 00:06:19.787 "is_configured": true, 00:06:19.787 "data_offset": 2048, 00:06:19.787 "data_size": 63488 00:06:19.787 }, 00:06:19.787 { 00:06:19.787 "name": null, 00:06:19.787 "uuid": "47bbf278-fa03-5e5a-86c2-11723ba4ab91", 00:06:19.787 "is_configured": false, 00:06:19.787 "data_offset": 2048, 00:06:19.787 "data_size": 63488 00:06:19.787 } 00:06:19.787 ] 00:06:19.787 }' 00:06:19.787 13:29:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:19.787 13:29:59 -- common/autotest_common.sh@10 -- # set +x 00:06:20.046 13:29:59 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:06:20.046 13:29:59 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:06:20.046 13:29:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:06:20.046 13:29:59 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:20.305 [2024-07-10 13:29:59.529611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:20.305 [2024-07-10 13:29:59.529682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:20.305 [2024-07-10 13:29:59.529707] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e99f00 00:06:20.305 [2024-07-10 13:29:59.529713] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:20.305 [2024-07-10 13:29:59.529802] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:20.305 [2024-07-10 13:29:59.529809] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:20.305 [2024-07-10 13:29:59.529827] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:06:20.305 [2024-07-10 13:29:59.529833] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:20.305 [2024-07-10 13:29:59.529853] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x829e9a180 00:06:20.305 [2024-07-10 13:29:59.529857] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:20.305 [2024-07-10 13:29:59.529871] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829efce20 00:06:20.305 [2024-07-10 13:29:59.529906] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829e9a180 00:06:20.305 [2024-07-10 13:29:59.529909] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x829e9a180 00:06:20.305 [2024-07-10 13:29:59.529924] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:20.305 pt2 00:06:20.305 13:29:59 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:06:20.305 13:29:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:06:20.305 13:29:59 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:20.305 13:29:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:06:20.305 13:29:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:20.305 13:29:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:20.305 13:29:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:20.305 13:29:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:20.305 13:29:59 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:20.305 13:29:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:20.305 13:29:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:20.305 13:29:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:20.305 13:29:59 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:20.305 13:29:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:20.563 13:29:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:20.563 "name": "raid_bdev1", 00:06:20.563 "uuid": "7f9825ac-3ec0-11ef-b9c4-5b09e08d4792", 00:06:20.563 "strip_size_kb": 64, 00:06:20.563 "state": "online", 00:06:20.563 "raid_level": "raid0", 00:06:20.563 "superblock": true, 00:06:20.563 "num_base_bdevs": 2, 00:06:20.563 "num_base_bdevs_discovered": 2, 00:06:20.563 "num_base_bdevs_operational": 2, 00:06:20.563 "base_bdevs_list": [ 00:06:20.563 { 00:06:20.563 "name": "pt1", 00:06:20.563 "uuid": "66bccc76-8a95-d254-be88-d560058c70a3", 00:06:20.563 "is_configured": true, 00:06:20.563 "data_offset": 2048, 00:06:20.563 "data_size": 63488 00:06:20.563 }, 00:06:20.563 { 00:06:20.563 "name": "pt2", 00:06:20.563 "uuid": "47bbf278-fa03-5e5a-86c2-11723ba4ab91", 00:06:20.563 "is_configured": true, 00:06:20.563 "data_offset": 2048, 00:06:20.563 "data_size": 63488 00:06:20.563 } 00:06:20.563 ] 00:06:20.563 }' 00:06:20.563 13:29:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:20.563 13:29:59 -- common/autotest_common.sh@10 -- # set +x 00:06:20.821 13:30:00 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:06:20.821 13:30:00 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:06:21.079 [2024-07-10 13:30:00.169652] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:21.079 13:30:00 -- bdev/bdev_raid.sh@430 -- # '[' 7f9825ac-3ec0-11ef-b9c4-5b09e08d4792 '!=' 7f9825ac-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:06:21.079 13:30:00 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:06:21.079 13:30:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:06:21.079 13:30:00 -- bdev/bdev_raid.sh@197 -- # return 1 00:06:21.079 13:30:00 -- bdev/bdev_raid.sh@511 -- # killprocess 47984 00:06:21.079 13:30:00 -- common/autotest_common.sh@926 -- # '[' -z 47984 ']' 00:06:21.079 13:30:00 -- common/autotest_common.sh@930 -- # kill -0 47984 00:06:21.079 13:30:00 -- common/autotest_common.sh@931 -- # uname 00:06:21.079 13:30:00 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:21.079 13:30:00 -- common/autotest_common.sh@934 -- # tail -1 00:06:21.079 13:30:00 -- common/autotest_common.sh@934 -- # ps -c -o command 47984 00:06:21.079 13:30:00 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:06:21.079 13:30:00 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:06:21.079 killing process with pid 47984 00:06:21.079 13:30:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47984' 00:06:21.079 13:30:00 -- common/autotest_common.sh@945 -- # kill 47984 00:06:21.079 [2024-07-10 13:30:00.201902] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:21.079 [2024-07-10 13:30:00.201930] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:21.079 [2024-07-10 13:30:00.201940] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:06:21.079 [2024-07-10 13:30:00.201944] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829e9a180 name raid_bdev1, state offline 00:06:21.079 13:30:00 -- common/autotest_common.sh@950 -- # wait 47984 00:06:21.079 [2024-07-10 13:30:00.211409] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:21.079 13:30:00 -- bdev/bdev_raid.sh@513 -- # return 0 00:06:21.079 00:06:21.079 real 0m5.521s 00:06:21.079 user 0m9.111s 00:06:21.079 sys 0m1.238s 00:06:21.079 13:30:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.079 13:30:00 -- common/autotest_common.sh@10 -- # set +x 00:06:21.079 ************************************ 00:06:21.079 END TEST raid_superblock_test 00:06:21.079 ************************************ 00:06:21.079 13:30:00 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:06:21.079 13:30:00 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:06:21.079 13:30:00 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:06:21.079 13:30:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.079 13:30:00 -- common/autotest_common.sh@10 -- # set +x 00:06:21.079 ************************************ 00:06:21.079 START TEST raid_state_function_test 00:06:21.079 ************************************ 00:06:21.079 13:30:00 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:06:21.079 13:30:00 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:06:21.079 13:30:00 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:06:21.079 13:30:00 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:06:21.079 13:30:00 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:06:21.079 13:30:00 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:06:21.079 13:30:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:21.079 13:30:00 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@226 -- # raid_pid=48129 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48129' 00:06:21.080 Process raid pid: 48129 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:21.080 13:30:00 -- bdev/bdev_raid.sh@228 -- # waitforlisten 48129 
/var/tmp/spdk-raid.sock 00:06:21.080 13:30:00 -- common/autotest_common.sh@819 -- # '[' -z 48129 ']' 00:06:21.080 13:30:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:21.080 13:30:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:21.080 13:30:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:21.080 13:30:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.080 13:30:00 -- common/autotest_common.sh@10 -- # set +x 00:06:21.339 [2024-07-10 13:30:00.429339] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:21.339 [2024-07-10 13:30:00.429608] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:21.596 EAL: TSC is not safe to use in SMP mode 00:06:21.596 EAL: TSC is not invariant 00:06:21.596 [2024-07-10 13:30:00.860969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.855 [2024-07-10 13:30:00.953936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.855 [2024-07-10 13:30:00.954388] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:21.855 [2024-07-10 13:30:00.954397] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:22.113 13:30:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:22.113 13:30:01 -- common/autotest_common.sh@852 -- # return 0 00:06:22.113 13:30:01 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:22.372 [2024-07-10 13:30:01.517462] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:22.372 [2024-07-10 13:30:01.517540] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:22.372 [2024-07-10 13:30:01.517545] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:22.372 [2024-07-10 13:30:01.517552] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:22.372 13:30:01 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:22.372 13:30:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:22.372 13:30:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:22.372 13:30:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:22.372 13:30:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:22.372 13:30:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:22.372 13:30:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:22.372 13:30:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:22.372 13:30:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:22.372 13:30:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:22.372 13:30:01 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:22.372 13:30:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:22.630 13:30:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:22.630 "name": "Existed_Raid", 00:06:22.630 
"uuid": "00000000-0000-0000-0000-000000000000", 00:06:22.630 "strip_size_kb": 64, 00:06:22.630 "state": "configuring", 00:06:22.630 "raid_level": "concat", 00:06:22.630 "superblock": false, 00:06:22.630 "num_base_bdevs": 2, 00:06:22.630 "num_base_bdevs_discovered": 0, 00:06:22.630 "num_base_bdevs_operational": 2, 00:06:22.630 "base_bdevs_list": [ 00:06:22.630 { 00:06:22.630 "name": "BaseBdev1", 00:06:22.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:22.630 "is_configured": false, 00:06:22.630 "data_offset": 0, 00:06:22.630 "data_size": 0 00:06:22.630 }, 00:06:22.630 { 00:06:22.630 "name": "BaseBdev2", 00:06:22.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:22.630 "is_configured": false, 00:06:22.630 "data_offset": 0, 00:06:22.630 "data_size": 0 00:06:22.630 } 00:06:22.630 ] 00:06:22.630 }' 00:06:22.630 13:30:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:22.630 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:06:22.894 13:30:01 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:22.894 [2024-07-10 13:30:02.169466] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:22.894 [2024-07-10 13:30:02.169491] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d73c500 name Existed_Raid, state configuring 00:06:22.894 13:30:02 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:23.153 [2024-07-10 13:30:02.361474] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:23.153 [2024-07-10 13:30:02.361516] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:23.153 [2024-07-10 13:30:02.361519] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:23.153 [2024-07-10 13:30:02.361525] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:23.153 13:30:02 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:23.413 [2024-07-10 13:30:02.554296] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:23.413 BaseBdev1 00:06:23.413 13:30:02 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:06:23.413 13:30:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:06:23.413 13:30:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:23.413 13:30:02 -- common/autotest_common.sh@889 -- # local i 00:06:23.413 13:30:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:23.413 13:30:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:23.413 13:30:02 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:23.672 13:30:02 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:23.672 [ 00:06:23.672 { 00:06:23.672 "name": "BaseBdev1", 00:06:23.672 "aliases": [ 00:06:23.672 "8313de72-3ec0-11ef-b9c4-5b09e08d4792" 00:06:23.672 ], 00:06:23.672 "product_name": "Malloc disk", 00:06:23.672 "block_size": 512, 00:06:23.672 "num_blocks": 65536, 00:06:23.672 "uuid": "8313de72-3ec0-11ef-b9c4-5b09e08d4792", 00:06:23.672 
"assigned_rate_limits": { 00:06:23.672 "rw_ios_per_sec": 0, 00:06:23.672 "rw_mbytes_per_sec": 0, 00:06:23.672 "r_mbytes_per_sec": 0, 00:06:23.672 "w_mbytes_per_sec": 0 00:06:23.672 }, 00:06:23.672 "claimed": true, 00:06:23.672 "claim_type": "exclusive_write", 00:06:23.672 "zoned": false, 00:06:23.672 "supported_io_types": { 00:06:23.672 "read": true, 00:06:23.672 "write": true, 00:06:23.672 "unmap": true, 00:06:23.672 "write_zeroes": true, 00:06:23.672 "flush": true, 00:06:23.672 "reset": true, 00:06:23.672 "compare": false, 00:06:23.672 "compare_and_write": false, 00:06:23.672 "abort": true, 00:06:23.672 "nvme_admin": false, 00:06:23.672 "nvme_io": false 00:06:23.672 }, 00:06:23.672 "memory_domains": [ 00:06:23.672 { 00:06:23.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.672 "dma_device_type": 2 00:06:23.672 } 00:06:23.672 ], 00:06:23.672 "driver_specific": {} 00:06:23.672 } 00:06:23.672 ] 00:06:23.672 13:30:02 -- common/autotest_common.sh@895 -- # return 0 00:06:23.672 13:30:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:23.672 13:30:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:23.672 13:30:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:23.672 13:30:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:23.672 13:30:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:23.672 13:30:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:23.672 13:30:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:23.672 13:30:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:23.672 13:30:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:23.672 13:30:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:23.672 13:30:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:23.672 13:30:02 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:23.930 13:30:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:23.930 "name": "Existed_Raid", 00:06:23.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:23.930 "strip_size_kb": 64, 00:06:23.930 "state": "configuring", 00:06:23.930 "raid_level": "concat", 00:06:23.930 "superblock": false, 00:06:23.930 "num_base_bdevs": 2, 00:06:23.930 "num_base_bdevs_discovered": 1, 00:06:23.930 "num_base_bdevs_operational": 2, 00:06:23.930 "base_bdevs_list": [ 00:06:23.930 { 00:06:23.930 "name": "BaseBdev1", 00:06:23.930 "uuid": "8313de72-3ec0-11ef-b9c4-5b09e08d4792", 00:06:23.930 "is_configured": true, 00:06:23.930 "data_offset": 0, 00:06:23.930 "data_size": 65536 00:06:23.930 }, 00:06:23.930 { 00:06:23.930 "name": "BaseBdev2", 00:06:23.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:23.930 "is_configured": false, 00:06:23.930 "data_offset": 0, 00:06:23.930 "data_size": 0 00:06:23.930 } 00:06:23.930 ] 00:06:23.930 }' 00:06:23.930 13:30:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:23.930 13:30:03 -- common/autotest_common.sh@10 -- # set +x 00:06:24.189 13:30:03 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:24.449 [2024-07-10 13:30:03.605523] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:24.449 [2024-07-10 13:30:03.605556] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d73c500 name Existed_Raid, state configuring 
00:06:24.449 13:30:03 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:06:24.449 13:30:03 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:24.449 [2024-07-10 13:30:03.789538] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:24.449 [2024-07-10 13:30:03.790142] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:24.449 [2024-07-10 13:30:03.790181] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:24.709 13:30:03 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:06:24.709 13:30:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:24.709 13:30:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:24.709 13:30:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:24.709 13:30:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:24.709 13:30:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:24.709 13:30:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:24.709 13:30:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:24.709 13:30:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:24.709 13:30:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:24.709 13:30:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:24.709 13:30:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:24.709 13:30:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:24.709 13:30:03 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:24.709 13:30:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:24.709 "name": "Existed_Raid", 00:06:24.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:24.709 "strip_size_kb": 64, 00:06:24.709 "state": "configuring", 00:06:24.709 "raid_level": "concat", 00:06:24.709 "superblock": false, 00:06:24.709 "num_base_bdevs": 2, 00:06:24.709 "num_base_bdevs_discovered": 1, 00:06:24.709 "num_base_bdevs_operational": 2, 00:06:24.709 "base_bdevs_list": [ 00:06:24.709 { 00:06:24.709 "name": "BaseBdev1", 00:06:24.709 "uuid": "8313de72-3ec0-11ef-b9c4-5b09e08d4792", 00:06:24.709 "is_configured": true, 00:06:24.709 "data_offset": 0, 00:06:24.709 "data_size": 65536 00:06:24.709 }, 00:06:24.709 { 00:06:24.709 "name": "BaseBdev2", 00:06:24.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:24.709 "is_configured": false, 00:06:24.709 "data_offset": 0, 00:06:24.709 "data_size": 0 00:06:24.709 } 00:06:24.709 ] 00:06:24.709 }' 00:06:24.709 13:30:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:24.709 13:30:04 -- common/autotest_common.sh@10 -- # set +x 00:06:24.969 13:30:04 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:06:25.229 [2024-07-10 13:30:04.457706] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:25.229 [2024-07-10 13:30:04.457738] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d73ca00 00:06:25.229 [2024-07-10 13:30:04.457744] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:25.229 [2024-07-10 13:30:04.457772] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d79fec0 00:06:25.229 [2024-07-10 13:30:04.457870] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d73ca00 00:06:25.229 [2024-07-10 13:30:04.457876] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82d73ca00 00:06:25.229 [2024-07-10 13:30:04.457919] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:25.229 BaseBdev2 00:06:25.229 13:30:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:06:25.229 13:30:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:06:25.229 13:30:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:25.229 13:30:04 -- common/autotest_common.sh@889 -- # local i 00:06:25.229 13:30:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:25.229 13:30:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:25.229 13:30:04 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:25.489 13:30:04 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:25.489 [ 00:06:25.489 { 00:06:25.489 "name": "BaseBdev2", 00:06:25.489 "aliases": [ 00:06:25.489 "8436677c-3ec0-11ef-b9c4-5b09e08d4792" 00:06:25.489 ], 00:06:25.489 "product_name": "Malloc disk", 00:06:25.489 "block_size": 512, 00:06:25.489 "num_blocks": 65536, 00:06:25.489 "uuid": "8436677c-3ec0-11ef-b9c4-5b09e08d4792", 00:06:25.489 "assigned_rate_limits": { 00:06:25.489 "rw_ios_per_sec": 0, 00:06:25.489 "rw_mbytes_per_sec": 0, 00:06:25.489 "r_mbytes_per_sec": 0, 00:06:25.489 "w_mbytes_per_sec": 0 00:06:25.489 }, 00:06:25.489 "claimed": true, 00:06:25.489 "claim_type": "exclusive_write", 00:06:25.489 "zoned": false, 00:06:25.489 "supported_io_types": { 00:06:25.489 "read": true, 00:06:25.489 "write": true, 00:06:25.489 "unmap": true, 00:06:25.489 "write_zeroes": true, 00:06:25.489 "flush": true, 00:06:25.489 "reset": true, 00:06:25.489 "compare": false, 00:06:25.489 "compare_and_write": false, 00:06:25.489 "abort": true, 00:06:25.489 "nvme_admin": false, 00:06:25.489 "nvme_io": false 00:06:25.489 }, 00:06:25.489 "memory_domains": [ 00:06:25.489 { 00:06:25.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.489 "dma_device_type": 2 00:06:25.489 } 00:06:25.489 ], 00:06:25.489 "driver_specific": {} 00:06:25.489 } 00:06:25.489 ] 00:06:25.748 13:30:04 -- common/autotest_common.sh@895 -- # return 0 00:06:25.748 13:30:04 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:06:25.748 13:30:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:25.748 13:30:04 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:25.748 13:30:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:25.748 13:30:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:25.748 13:30:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:25.748 13:30:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:25.748 13:30:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:25.748 13:30:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:25.748 13:30:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:25.748 13:30:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:25.748 13:30:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:25.748 
13:30:04 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:25.748 13:30:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:25.748 13:30:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:25.748 "name": "Existed_Raid", 00:06:25.748 "uuid": "84366eed-3ec0-11ef-b9c4-5b09e08d4792", 00:06:25.748 "strip_size_kb": 64, 00:06:25.748 "state": "online", 00:06:25.748 "raid_level": "concat", 00:06:25.748 "superblock": false, 00:06:25.748 "num_base_bdevs": 2, 00:06:25.748 "num_base_bdevs_discovered": 2, 00:06:25.748 "num_base_bdevs_operational": 2, 00:06:25.748 "base_bdevs_list": [ 00:06:25.748 { 00:06:25.748 "name": "BaseBdev1", 00:06:25.748 "uuid": "8313de72-3ec0-11ef-b9c4-5b09e08d4792", 00:06:25.748 "is_configured": true, 00:06:25.748 "data_offset": 0, 00:06:25.748 "data_size": 65536 00:06:25.748 }, 00:06:25.748 { 00:06:25.748 "name": "BaseBdev2", 00:06:25.748 "uuid": "8436677c-3ec0-11ef-b9c4-5b09e08d4792", 00:06:25.748 "is_configured": true, 00:06:25.748 "data_offset": 0, 00:06:25.748 "data_size": 65536 00:06:25.748 } 00:06:25.748 ] 00:06:25.748 }' 00:06:25.748 13:30:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:25.748 13:30:05 -- common/autotest_common.sh@10 -- # set +x 00:06:26.006 13:30:05 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:26.265 [2024-07-10 13:30:05.477591] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:26.265 [2024-07-10 13:30:05.477617] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:26.265 [2024-07-10 13:30:05.477630] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:26.265 13:30:05 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:06:26.265 13:30:05 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:06:26.265 13:30:05 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:06:26.265 13:30:05 -- bdev/bdev_raid.sh@197 -- # return 1 00:06:26.265 13:30:05 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:06:26.265 13:30:05 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:26.265 13:30:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:26.265 13:30:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:06:26.265 13:30:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:26.265 13:30:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:26.265 13:30:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:06:26.265 13:30:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:26.265 13:30:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:26.265 13:30:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:26.265 13:30:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:26.265 13:30:05 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:26.265 13:30:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:26.525 13:30:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:26.525 "name": "Existed_Raid", 00:06:26.525 "uuid": "84366eed-3ec0-11ef-b9c4-5b09e08d4792", 00:06:26.525 "strip_size_kb": 64, 00:06:26.525 "state": "offline", 00:06:26.525 "raid_level": "concat", 00:06:26.525 "superblock": false, 00:06:26.525 
"num_base_bdevs": 2, 00:06:26.525 "num_base_bdevs_discovered": 1, 00:06:26.525 "num_base_bdevs_operational": 1, 00:06:26.525 "base_bdevs_list": [ 00:06:26.525 { 00:06:26.525 "name": null, 00:06:26.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:26.525 "is_configured": false, 00:06:26.525 "data_offset": 0, 00:06:26.525 "data_size": 65536 00:06:26.525 }, 00:06:26.525 { 00:06:26.525 "name": "BaseBdev2", 00:06:26.525 "uuid": "8436677c-3ec0-11ef-b9c4-5b09e08d4792", 00:06:26.525 "is_configured": true, 00:06:26.525 "data_offset": 0, 00:06:26.525 "data_size": 65536 00:06:26.525 } 00:06:26.525 ] 00:06:26.525 }' 00:06:26.525 13:30:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:26.525 13:30:05 -- common/autotest_common.sh@10 -- # set +x 00:06:26.785 13:30:05 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:06:26.785 13:30:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:26.785 13:30:05 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:26.785 13:30:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:06:27.044 13:30:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:06:27.044 13:30:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:27.044 13:30:06 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:06:27.044 [2024-07-10 13:30:06.350264] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:27.044 [2024-07-10 13:30:06.350297] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d73ca00 name Existed_Raid, state offline 00:06:27.044 13:30:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:06:27.044 13:30:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:27.044 13:30:06 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:27.044 13:30:06 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:06:27.303 13:30:06 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:06:27.303 13:30:06 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:06:27.303 13:30:06 -- bdev/bdev_raid.sh@287 -- # killprocess 48129 00:06:27.303 13:30:06 -- common/autotest_common.sh@926 -- # '[' -z 48129 ']' 00:06:27.303 13:30:06 -- common/autotest_common.sh@930 -- # kill -0 48129 00:06:27.303 13:30:06 -- common/autotest_common.sh@931 -- # uname 00:06:27.303 13:30:06 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:27.303 13:30:06 -- common/autotest_common.sh@934 -- # tail -1 00:06:27.303 13:30:06 -- common/autotest_common.sh@934 -- # ps -c -o command 48129 00:06:27.303 13:30:06 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:06:27.303 13:30:06 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:06:27.303 killing process with pid 48129 00:06:27.303 13:30:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48129' 00:06:27.303 13:30:06 -- common/autotest_common.sh@945 -- # kill 48129 00:06:27.303 [2024-07-10 13:30:06.575370] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:27.303 [2024-07-10 13:30:06.575414] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:27.303 13:30:06 -- common/autotest_common.sh@950 -- # wait 48129 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@289 -- # return 0 00:06:27.563 00:06:27.563 real 0m6.313s 00:06:27.563 user 0m10.629s 00:06:27.563 sys 0m1.295s 00:06:27.563 
13:30:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.563 13:30:06 -- common/autotest_common.sh@10 -- # set +x 00:06:27.563 ************************************ 00:06:27.563 END TEST raid_state_function_test 00:06:27.563 ************************************ 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:06:27.563 13:30:06 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:06:27.563 13:30:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.563 13:30:06 -- common/autotest_common.sh@10 -- # set +x 00:06:27.563 ************************************ 00:06:27.563 START TEST raid_state_function_test_sb 00:06:27.563 ************************************ 00:06:27.563 13:30:06 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@226 -- # raid_pid=48325 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48325' 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:27.563 Process raid pid: 48325 00:06:27.563 13:30:06 -- bdev/bdev_raid.sh@228 -- # waitforlisten 48325 /var/tmp/spdk-raid.sock 00:06:27.563 13:30:06 -- common/autotest_common.sh@819 -- # '[' -z 48325 ']' 00:06:27.563 13:30:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:27.563 13:30:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:27.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:27.563 13:30:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:06:27.563 13:30:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:27.563 13:30:06 -- common/autotest_common.sh@10 -- # set +x 00:06:27.563 [2024-07-10 13:30:06.793041] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:27.563 [2024-07-10 13:30:06.793331] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:28.131 EAL: TSC is not safe to use in SMP mode 00:06:28.131 EAL: TSC is not invariant 00:06:28.131 [2024-07-10 13:30:07.226219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.131 [2024-07-10 13:30:07.316602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.131 [2024-07-10 13:30:07.317062] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:28.131 [2024-07-10 13:30:07.317072] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:28.390 13:30:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:28.390 13:30:07 -- common/autotest_common.sh@852 -- # return 0 00:06:28.390 13:30:07 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:28.650 [2024-07-10 13:30:07.904136] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:28.650 [2024-07-10 13:30:07.904188] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:28.650 [2024-07-10 13:30:07.904199] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:28.650 [2024-07-10 13:30:07.904205] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:28.650 13:30:07 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:28.650 13:30:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:28.650 13:30:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:28.650 13:30:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:28.650 13:30:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:28.650 13:30:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:28.650 13:30:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:28.651 13:30:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:28.651 13:30:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:28.651 13:30:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:28.651 13:30:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:28.651 13:30:07 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:28.910 13:30:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:28.910 "name": "Existed_Raid", 00:06:28.910 "uuid": "86444f1e-3ec0-11ef-b9c4-5b09e08d4792", 00:06:28.910 "strip_size_kb": 64, 00:06:28.910 "state": "configuring", 00:06:28.910 "raid_level": "concat", 00:06:28.910 "superblock": true, 00:06:28.910 "num_base_bdevs": 2, 00:06:28.910 "num_base_bdevs_discovered": 0, 00:06:28.910 "num_base_bdevs_operational": 2, 00:06:28.910 "base_bdevs_list": [ 00:06:28.910 { 00:06:28.910 "name": "BaseBdev1", 00:06:28.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:28.910 "is_configured": false, 00:06:28.910 "data_offset": 0, 00:06:28.910 
"data_size": 0 00:06:28.910 }, 00:06:28.910 { 00:06:28.910 "name": "BaseBdev2", 00:06:28.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:28.910 "is_configured": false, 00:06:28.910 "data_offset": 0, 00:06:28.910 "data_size": 0 00:06:28.910 } 00:06:28.910 ] 00:06:28.910 }' 00:06:28.910 13:30:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:28.910 13:30:08 -- common/autotest_common.sh@10 -- # set +x 00:06:29.170 13:30:08 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:29.429 [2024-07-10 13:30:08.576094] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:29.429 [2024-07-10 13:30:08.576118] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d53d500 name Existed_Raid, state configuring 00:06:29.429 13:30:08 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:29.687 [2024-07-10 13:30:08.776113] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:29.687 [2024-07-10 13:30:08.776153] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:29.687 [2024-07-10 13:30:08.776157] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:29.687 [2024-07-10 13:30:08.776163] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:29.687 13:30:08 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:29.687 [2024-07-10 13:30:08.968923] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:29.687 BaseBdev1 00:06:29.687 13:30:08 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:06:29.687 13:30:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:06:29.687 13:30:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:29.687 13:30:08 -- common/autotest_common.sh@889 -- # local i 00:06:29.688 13:30:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:29.688 13:30:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:29.688 13:30:08 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:29.947 13:30:09 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:30.205 [ 00:06:30.205 { 00:06:30.205 "name": "BaseBdev1", 00:06:30.205 "aliases": [ 00:06:30.205 "86e6aa60-3ec0-11ef-b9c4-5b09e08d4792" 00:06:30.205 ], 00:06:30.205 "product_name": "Malloc disk", 00:06:30.205 "block_size": 512, 00:06:30.205 "num_blocks": 65536, 00:06:30.205 "uuid": "86e6aa60-3ec0-11ef-b9c4-5b09e08d4792", 00:06:30.205 "assigned_rate_limits": { 00:06:30.205 "rw_ios_per_sec": 0, 00:06:30.205 "rw_mbytes_per_sec": 0, 00:06:30.205 "r_mbytes_per_sec": 0, 00:06:30.205 "w_mbytes_per_sec": 0 00:06:30.205 }, 00:06:30.205 "claimed": true, 00:06:30.205 "claim_type": "exclusive_write", 00:06:30.205 "zoned": false, 00:06:30.205 "supported_io_types": { 00:06:30.205 "read": true, 00:06:30.205 "write": true, 00:06:30.205 "unmap": true, 00:06:30.205 "write_zeroes": true, 00:06:30.205 "flush": true, 00:06:30.205 "reset": true, 00:06:30.205 "compare": false, 
00:06:30.205 "compare_and_write": false, 00:06:30.205 "abort": true, 00:06:30.205 "nvme_admin": false, 00:06:30.205 "nvme_io": false 00:06:30.205 }, 00:06:30.205 "memory_domains": [ 00:06:30.205 { 00:06:30.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:30.205 "dma_device_type": 2 00:06:30.205 } 00:06:30.205 ], 00:06:30.205 "driver_specific": {} 00:06:30.205 } 00:06:30.205 ] 00:06:30.205 13:30:09 -- common/autotest_common.sh@895 -- # return 0 00:06:30.205 13:30:09 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:30.205 13:30:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:30.205 13:30:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:30.205 13:30:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:30.205 13:30:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:30.205 13:30:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:30.205 13:30:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:30.205 13:30:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:30.205 13:30:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:30.205 13:30:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:30.205 13:30:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:30.205 13:30:09 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:30.462 13:30:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:30.462 "name": "Existed_Raid", 00:06:30.462 "uuid": "86c95cbf-3ec0-11ef-b9c4-5b09e08d4792", 00:06:30.462 "strip_size_kb": 64, 00:06:30.462 "state": "configuring", 00:06:30.462 "raid_level": "concat", 00:06:30.462 "superblock": true, 00:06:30.462 "num_base_bdevs": 2, 00:06:30.462 "num_base_bdevs_discovered": 1, 00:06:30.462 "num_base_bdevs_operational": 2, 00:06:30.462 "base_bdevs_list": [ 00:06:30.462 { 00:06:30.462 "name": "BaseBdev1", 00:06:30.462 "uuid": "86e6aa60-3ec0-11ef-b9c4-5b09e08d4792", 00:06:30.462 "is_configured": true, 00:06:30.462 "data_offset": 2048, 00:06:30.462 "data_size": 63488 00:06:30.462 }, 00:06:30.462 { 00:06:30.462 "name": "BaseBdev2", 00:06:30.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:30.462 "is_configured": false, 00:06:30.462 "data_offset": 0, 00:06:30.462 "data_size": 0 00:06:30.462 } 00:06:30.462 ] 00:06:30.462 }' 00:06:30.462 13:30:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:30.462 13:30:09 -- common/autotest_common.sh@10 -- # set +x 00:06:30.720 13:30:09 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:30.720 [2024-07-10 13:30:10.056183] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:30.720 [2024-07-10 13:30:10.056215] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d53d500 name Existed_Raid, state configuring 00:06:30.979 13:30:10 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:06:30.979 13:30:10 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:30.979 13:30:10 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:31.238 BaseBdev1 00:06:31.238 13:30:10 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:06:31.238 13:30:10 -- 
common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:06:31.238 13:30:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:31.238 13:30:10 -- common/autotest_common.sh@889 -- # local i 00:06:31.238 13:30:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:31.238 13:30:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:31.238 13:30:10 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:31.497 13:30:10 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:31.756 [ 00:06:31.756 { 00:06:31.756 "name": "BaseBdev1", 00:06:31.756 "aliases": [ 00:06:31.756 "87cb0c2c-3ec0-11ef-b9c4-5b09e08d4792" 00:06:31.756 ], 00:06:31.756 "product_name": "Malloc disk", 00:06:31.756 "block_size": 512, 00:06:31.756 "num_blocks": 65536, 00:06:31.756 "uuid": "87cb0c2c-3ec0-11ef-b9c4-5b09e08d4792", 00:06:31.756 "assigned_rate_limits": { 00:06:31.756 "rw_ios_per_sec": 0, 00:06:31.756 "rw_mbytes_per_sec": 0, 00:06:31.756 "r_mbytes_per_sec": 0, 00:06:31.756 "w_mbytes_per_sec": 0 00:06:31.756 }, 00:06:31.756 "claimed": false, 00:06:31.756 "zoned": false, 00:06:31.756 "supported_io_types": { 00:06:31.756 "read": true, 00:06:31.756 "write": true, 00:06:31.756 "unmap": true, 00:06:31.756 "write_zeroes": true, 00:06:31.756 "flush": true, 00:06:31.756 "reset": true, 00:06:31.756 "compare": false, 00:06:31.756 "compare_and_write": false, 00:06:31.756 "abort": true, 00:06:31.756 "nvme_admin": false, 00:06:31.756 "nvme_io": false 00:06:31.756 }, 00:06:31.756 "memory_domains": [ 00:06:31.756 { 00:06:31.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:31.756 "dma_device_type": 2 00:06:31.756 } 00:06:31.756 ], 00:06:31.756 "driver_specific": {} 00:06:31.756 } 00:06:31.756 ] 00:06:31.756 13:30:10 -- common/autotest_common.sh@895 -- # return 0 00:06:31.756 13:30:10 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:31.756 [2024-07-10 13:30:11.048906] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:31.756 [2024-07-10 13:30:11.049352] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:31.756 [2024-07-10 13:30:11.049399] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:31.756 13:30:11 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:06:31.756 13:30:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:31.756 13:30:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:31.756 13:30:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:31.756 13:30:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:31.756 13:30:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:31.756 13:30:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:31.756 13:30:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:31.756 13:30:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:31.756 13:30:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:31.756 13:30:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:31.756 13:30:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:31.756 13:30:11 -- bdev/bdev_raid.sh@127 
-- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:31.756 13:30:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:32.043 13:30:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:32.043 "name": "Existed_Raid", 00:06:32.043 "uuid": "88242969-3ec0-11ef-b9c4-5b09e08d4792", 00:06:32.043 "strip_size_kb": 64, 00:06:32.043 "state": "configuring", 00:06:32.043 "raid_level": "concat", 00:06:32.043 "superblock": true, 00:06:32.043 "num_base_bdevs": 2, 00:06:32.043 "num_base_bdevs_discovered": 1, 00:06:32.043 "num_base_bdevs_operational": 2, 00:06:32.043 "base_bdevs_list": [ 00:06:32.043 { 00:06:32.043 "name": "BaseBdev1", 00:06:32.043 "uuid": "87cb0c2c-3ec0-11ef-b9c4-5b09e08d4792", 00:06:32.043 "is_configured": true, 00:06:32.043 "data_offset": 2048, 00:06:32.043 "data_size": 63488 00:06:32.043 }, 00:06:32.043 { 00:06:32.043 "name": "BaseBdev2", 00:06:32.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:32.043 "is_configured": false, 00:06:32.043 "data_offset": 0, 00:06:32.043 "data_size": 0 00:06:32.043 } 00:06:32.043 ] 00:06:32.043 }' 00:06:32.043 13:30:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:32.043 13:30:11 -- common/autotest_common.sh@10 -- # set +x 00:06:32.300 13:30:11 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:06:32.559 [2024-07-10 13:30:11.696990] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:32.559 [2024-07-10 13:30:11.697047] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d53da00 00:06:32.559 [2024-07-10 13:30:11.697052] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:32.559 [2024-07-10 13:30:11.697069] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d5a0ec0 00:06:32.559 [2024-07-10 13:30:11.697097] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d53da00 00:06:32.559 [2024-07-10 13:30:11.697100] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82d53da00 00:06:32.559 [2024-07-10 13:30:11.697113] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:32.559 BaseBdev2 00:06:32.559 13:30:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:06:32.559 13:30:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:06:32.559 13:30:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:32.559 13:30:11 -- common/autotest_common.sh@889 -- # local i 00:06:32.559 13:30:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:32.559 13:30:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:32.559 13:30:11 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:32.818 13:30:11 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:32.818 [ 00:06:32.818 { 00:06:32.818 "name": "BaseBdev2", 00:06:32.818 "aliases": [ 00:06:32.818 "888709b8-3ec0-11ef-b9c4-5b09e08d4792" 00:06:32.818 ], 00:06:32.818 "product_name": "Malloc disk", 00:06:32.818 "block_size": 512, 00:06:32.818 "num_blocks": 65536, 00:06:32.818 "uuid": "888709b8-3ec0-11ef-b9c4-5b09e08d4792", 00:06:32.818 "assigned_rate_limits": { 00:06:32.818 "rw_ios_per_sec": 0, 
00:06:32.818 "rw_mbytes_per_sec": 0, 00:06:32.818 "r_mbytes_per_sec": 0, 00:06:32.818 "w_mbytes_per_sec": 0 00:06:32.818 }, 00:06:32.818 "claimed": true, 00:06:32.818 "claim_type": "exclusive_write", 00:06:32.818 "zoned": false, 00:06:32.818 "supported_io_types": { 00:06:32.818 "read": true, 00:06:32.818 "write": true, 00:06:32.818 "unmap": true, 00:06:32.818 "write_zeroes": true, 00:06:32.818 "flush": true, 00:06:32.818 "reset": true, 00:06:32.818 "compare": false, 00:06:32.818 "compare_and_write": false, 00:06:32.818 "abort": true, 00:06:32.818 "nvme_admin": false, 00:06:32.818 "nvme_io": false 00:06:32.818 }, 00:06:32.818 "memory_domains": [ 00:06:32.818 { 00:06:32.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:32.818 "dma_device_type": 2 00:06:32.818 } 00:06:32.818 ], 00:06:32.818 "driver_specific": {} 00:06:32.818 } 00:06:32.818 ] 00:06:32.818 13:30:12 -- common/autotest_common.sh@895 -- # return 0 00:06:32.818 13:30:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:06:32.818 13:30:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:32.818 13:30:12 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:32.818 13:30:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:32.818 13:30:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:32.818 13:30:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:32.818 13:30:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:32.818 13:30:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:32.818 13:30:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:32.818 13:30:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:32.818 13:30:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:32.818 13:30:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:32.819 13:30:12 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:32.819 13:30:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:33.078 13:30:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:33.078 "name": "Existed_Raid", 00:06:33.078 "uuid": "88242969-3ec0-11ef-b9c4-5b09e08d4792", 00:06:33.078 "strip_size_kb": 64, 00:06:33.078 "state": "online", 00:06:33.078 "raid_level": "concat", 00:06:33.078 "superblock": true, 00:06:33.078 "num_base_bdevs": 2, 00:06:33.078 "num_base_bdevs_discovered": 2, 00:06:33.078 "num_base_bdevs_operational": 2, 00:06:33.078 "base_bdevs_list": [ 00:06:33.078 { 00:06:33.078 "name": "BaseBdev1", 00:06:33.078 "uuid": "87cb0c2c-3ec0-11ef-b9c4-5b09e08d4792", 00:06:33.078 "is_configured": true, 00:06:33.078 "data_offset": 2048, 00:06:33.078 "data_size": 63488 00:06:33.078 }, 00:06:33.078 { 00:06:33.078 "name": "BaseBdev2", 00:06:33.078 "uuid": "888709b8-3ec0-11ef-b9c4-5b09e08d4792", 00:06:33.078 "is_configured": true, 00:06:33.078 "data_offset": 2048, 00:06:33.078 "data_size": 63488 00:06:33.078 } 00:06:33.078 ] 00:06:33.078 }' 00:06:33.078 13:30:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:33.078 13:30:12 -- common/autotest_common.sh@10 -- # set +x 00:06:33.337 13:30:12 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:33.595 [2024-07-10 13:30:12.728926] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:33.595 [2024-07-10 13:30:12.728952] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:06:33.595 [2024-07-10 13:30:12.728965] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:33.595 13:30:12 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:06:33.595 13:30:12 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:06:33.595 13:30:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:06:33.595 13:30:12 -- bdev/bdev_raid.sh@197 -- # return 1 00:06:33.595 13:30:12 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:06:33.595 13:30:12 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:33.595 13:30:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:33.595 13:30:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:06:33.595 13:30:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:33.595 13:30:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:33.595 13:30:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:06:33.595 13:30:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:33.595 13:30:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:33.595 13:30:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:33.595 13:30:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:33.595 13:30:12 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:33.595 13:30:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:33.853 13:30:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:33.853 "name": "Existed_Raid", 00:06:33.853 "uuid": "88242969-3ec0-11ef-b9c4-5b09e08d4792", 00:06:33.853 "strip_size_kb": 64, 00:06:33.853 "state": "offline", 00:06:33.853 "raid_level": "concat", 00:06:33.853 "superblock": true, 00:06:33.853 "num_base_bdevs": 2, 00:06:33.853 "num_base_bdevs_discovered": 1, 00:06:33.853 "num_base_bdevs_operational": 1, 00:06:33.853 "base_bdevs_list": [ 00:06:33.853 { 00:06:33.853 "name": null, 00:06:33.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.853 "is_configured": false, 00:06:33.853 "data_offset": 2048, 00:06:33.853 "data_size": 63488 00:06:33.853 }, 00:06:33.853 { 00:06:33.853 "name": "BaseBdev2", 00:06:33.853 "uuid": "888709b8-3ec0-11ef-b9c4-5b09e08d4792", 00:06:33.853 "is_configured": true, 00:06:33.853 "data_offset": 2048, 00:06:33.853 "data_size": 63488 00:06:33.853 } 00:06:33.853 ] 00:06:33.853 }' 00:06:33.853 13:30:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:33.853 13:30:12 -- common/autotest_common.sh@10 -- # set +x 00:06:34.111 13:30:13 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:06:34.111 13:30:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:34.111 13:30:13 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:34.111 13:30:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:06:34.111 13:30:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:06:34.111 13:30:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:34.111 13:30:13 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:06:34.369 [2024-07-10 13:30:13.601700] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:34.369 [2024-07-10 13:30:13.601732] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d53da00 name 
Existed_Raid, state offline 00:06:34.369 13:30:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:06:34.369 13:30:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:34.369 13:30:13 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:34.369 13:30:13 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:06:34.626 13:30:13 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:06:34.626 13:30:13 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:06:34.626 13:30:13 -- bdev/bdev_raid.sh@287 -- # killprocess 48325 00:06:34.626 13:30:13 -- common/autotest_common.sh@926 -- # '[' -z 48325 ']' 00:06:34.626 13:30:13 -- common/autotest_common.sh@930 -- # kill -0 48325 00:06:34.626 13:30:13 -- common/autotest_common.sh@931 -- # uname 00:06:34.626 13:30:13 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:34.626 13:30:13 -- common/autotest_common.sh@934 -- # tail -1 00:06:34.626 13:30:13 -- common/autotest_common.sh@934 -- # ps -c -o command 48325 00:06:34.626 13:30:13 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:06:34.626 13:30:13 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:06:34.626 killing process with pid 48325 00:06:34.626 13:30:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48325' 00:06:34.626 13:30:13 -- common/autotest_common.sh@945 -- # kill 48325 00:06:34.626 [2024-07-10 13:30:13.826976] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:34.626 [2024-07-10 13:30:13.827018] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:34.626 13:30:13 -- common/autotest_common.sh@950 -- # wait 48325 00:06:34.884 13:30:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:06:34.884 00:06:34.884 real 0m7.202s 00:06:34.884 user 0m12.407s 00:06:34.884 sys 0m1.278s 00:06:34.884 13:30:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.884 13:30:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.884 ************************************ 00:06:34.884 END TEST raid_state_function_test_sb 00:06:34.884 ************************************ 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:06:34.884 13:30:14 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:34.884 13:30:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.884 13:30:14 -- common/autotest_common.sh@10 -- # set +x 00:06:34.884 ************************************ 00:06:34.884 START TEST raid_superblock_test 00:06:34.884 ************************************ 00:06:34.884 13:30:14 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 
00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@357 -- # raid_pid=48524 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@358 -- # waitforlisten 48524 /var/tmp/spdk-raid.sock 00:06:34.884 13:30:14 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:06:34.884 13:30:14 -- common/autotest_common.sh@819 -- # '[' -z 48524 ']' 00:06:34.884 13:30:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:34.884 13:30:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:34.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:34.884 13:30:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:34.884 13:30:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:34.884 13:30:14 -- common/autotest_common.sh@10 -- # set +x 00:06:34.884 [2024-07-10 13:30:14.040255] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:34.884 [2024-07-10 13:30:14.040584] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:35.144 EAL: TSC is not safe to use in SMP mode 00:06:35.144 EAL: TSC is not invariant 00:06:35.402 [2024-07-10 13:30:14.488025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.402 [2024-07-10 13:30:14.578258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.402 [2024-07-10 13:30:14.578719] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:35.402 [2024-07-10 13:30:14.578729] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:35.661 13:30:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:35.661 13:30:14 -- common/autotest_common.sh@852 -- # return 0 00:06:35.661 13:30:14 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:06:35.661 13:30:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:06:35.661 13:30:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:06:35.661 13:30:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:06:35.661 13:30:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:35.661 13:30:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:35.661 13:30:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:06:35.661 13:30:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:35.661 13:30:14 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:06:35.920 malloc1 00:06:35.920 13:30:15 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:36.178 [2024-07-10 13:30:15.389854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:36.178 [2024-07-10 13:30:15.389925] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:36.178 [2024-07-10 13:30:15.390472] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c01e780 00:06:36.178 [2024-07-10 13:30:15.390497] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:36.178 [2024-07-10 13:30:15.391211] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:36.178 [2024-07-10 13:30:15.391241] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:36.178 pt1 00:06:36.178 13:30:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:06:36.178 13:30:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:06:36.178 13:30:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:06:36.178 13:30:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:06:36.178 13:30:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:36.178 13:30:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:36.178 13:30:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:06:36.178 13:30:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:36.178 13:30:15 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:06:36.436 malloc2 00:06:36.437 13:30:15 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:36.695 [2024-07-10 13:30:15.789863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:36.695 [2024-07-10 13:30:15.789914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:36.695 [2024-07-10 13:30:15.789955] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c01ec80 00:06:36.695 [2024-07-10 13:30:15.789961] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:36.695 [2024-07-10 13:30:15.790461] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:36.695 [2024-07-10 13:30:15.790488] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:36.695 pt2 00:06:36.695 13:30:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:06:36.695 13:30:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:06:36.695 13:30:15 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:06:36.695 [2024-07-10 13:30:15.989874] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:36.695 [2024-07-10 13:30:15.990306] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:36.695 [2024-07-10 13:30:15.990359] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c01ef00 00:06:36.695 [2024-07-10 13:30:15.990369] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:36.695 [2024-07-10 13:30:15.990396] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c081e20 00:06:36.695 [2024-07-10 13:30:15.990453] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c01ef00 00:06:36.695 [2024-07-10 13:30:15.990460] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x82c01ef00 00:06:36.695 [2024-07-10 13:30:15.990480] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:36.695 13:30:16 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:36.695 13:30:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:06:36.695 13:30:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:36.695 13:30:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:36.695 13:30:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:36.695 13:30:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:36.695 13:30:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:36.695 13:30:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:36.695 13:30:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:36.695 13:30:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:36.695 13:30:16 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:36.695 13:30:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:36.953 13:30:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:36.953 "name": "raid_bdev1", 00:06:36.953 "uuid": "8b16183d-3ec0-11ef-b9c4-5b09e08d4792", 00:06:36.953 "strip_size_kb": 64, 00:06:36.953 "state": "online", 00:06:36.953 "raid_level": "concat", 00:06:36.953 "superblock": true, 00:06:36.953 "num_base_bdevs": 2, 00:06:36.953 "num_base_bdevs_discovered": 2, 00:06:36.953 "num_base_bdevs_operational": 2, 00:06:36.953 "base_bdevs_list": [ 00:06:36.953 { 00:06:36.953 "name": "pt1", 00:06:36.953 "uuid": "92222598-eb10-9f5a-8e40-8dc875d44954", 00:06:36.953 "is_configured": true, 00:06:36.953 "data_offset": 2048, 00:06:36.953 "data_size": 63488 00:06:36.953 }, 00:06:36.953 { 00:06:36.953 "name": "pt2", 00:06:36.953 "uuid": "951b6a7d-e8b1-6b56-a9fe-3555edca1f3d", 00:06:36.953 "is_configured": true, 00:06:36.953 "data_offset": 2048, 00:06:36.953 "data_size": 63488 00:06:36.953 } 00:06:36.953 ] 00:06:36.953 }' 00:06:36.953 13:30:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:36.953 13:30:16 -- common/autotest_common.sh@10 -- # set +x 00:06:37.211 13:30:16 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:06:37.211 13:30:16 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:06:37.470 [2024-07-10 13:30:16.645904] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:37.470 13:30:16 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=8b16183d-3ec0-11ef-b9c4-5b09e08d4792 00:06:37.470 13:30:16 -- bdev/bdev_raid.sh@380 -- # '[' -z 8b16183d-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:06:37.470 13:30:16 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:06:37.751 [2024-07-10 13:30:16.837884] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:37.751 [2024-07-10 13:30:16.837910] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:37.751 [2024-07-10 13:30:16.837929] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:37.751 [2024-07-10 13:30:16.837939] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:37.751 [2024-07-10 13:30:16.837943] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x82c01ef00 name raid_bdev1, state offline 00:06:37.751 13:30:16 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:06:37.751 13:30:16 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:37.751 13:30:17 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:06:37.751 13:30:17 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:06:37.751 13:30:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:06:37.751 13:30:17 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:06:38.034 13:30:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:06:38.034 13:30:17 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:06:38.293 13:30:17 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:06:38.293 13:30:17 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:38.293 13:30:17 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:06:38.293 13:30:17 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:06:38.293 13:30:17 -- common/autotest_common.sh@640 -- # local es=0 00:06:38.293 13:30:17 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:06:38.293 13:30:17 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:38.293 13:30:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.293 13:30:17 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:38.293 13:30:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.293 13:30:17 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:38.293 13:30:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.293 13:30:17 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:38.293 13:30:17 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:38.293 13:30:17 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:06:38.552 [2024-07-10 13:30:17.785921] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:38.552 [2024-07-10 13:30:17.786389] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:38.552 [2024-07-10 13:30:17.786415] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:06:38.552 [2024-07-10 13:30:17.786450] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:06:38.553 [2024-07-10 13:30:17.786459] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:38.553 [2024-07-10 13:30:17.786463] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c01ec80 name raid_bdev1, state 
configuring 00:06:38.553 request: 00:06:38.553 { 00:06:38.553 "name": "raid_bdev1", 00:06:38.553 "raid_level": "concat", 00:06:38.553 "base_bdevs": [ 00:06:38.553 "malloc1", 00:06:38.553 "malloc2" 00:06:38.553 ], 00:06:38.553 "superblock": false, 00:06:38.553 "strip_size_kb": 64, 00:06:38.553 "method": "bdev_raid_create", 00:06:38.553 "req_id": 1 00:06:38.553 } 00:06:38.553 Got JSON-RPC error response 00:06:38.553 response: 00:06:38.553 { 00:06:38.553 "code": -17, 00:06:38.553 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:38.553 } 00:06:38.553 13:30:17 -- common/autotest_common.sh@643 -- # es=1 00:06:38.553 13:30:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:38.553 13:30:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:38.553 13:30:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:38.553 13:30:17 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:38.553 13:30:17 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:06:38.811 13:30:17 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:06:38.811 13:30:17 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:06:38.811 13:30:17 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:39.070 [2024-07-10 13:30:18.173908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:39.070 [2024-07-10 13:30:18.173966] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:39.070 [2024-07-10 13:30:18.173993] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c01e780 00:06:39.070 [2024-07-10 13:30:18.173999] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:39.070 [2024-07-10 13:30:18.174510] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:39.070 [2024-07-10 13:30:18.174538] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:39.070 [2024-07-10 13:30:18.174559] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:06:39.070 [2024-07-10 13:30:18.174569] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:39.070 pt1 00:06:39.070 13:30:18 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:06:39.070 13:30:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:06:39.070 13:30:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:39.070 13:30:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:39.070 13:30:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:39.070 13:30:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:39.070 13:30:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:39.070 13:30:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:39.070 13:30:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:39.070 13:30:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:39.070 13:30:18 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:39.070 13:30:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:39.070 13:30:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:39.070 "name": "raid_bdev1", 
00:06:39.070 "uuid": "8b16183d-3ec0-11ef-b9c4-5b09e08d4792", 00:06:39.070 "strip_size_kb": 64, 00:06:39.070 "state": "configuring", 00:06:39.070 "raid_level": "concat", 00:06:39.070 "superblock": true, 00:06:39.070 "num_base_bdevs": 2, 00:06:39.070 "num_base_bdevs_discovered": 1, 00:06:39.070 "num_base_bdevs_operational": 2, 00:06:39.070 "base_bdevs_list": [ 00:06:39.070 { 00:06:39.070 "name": "pt1", 00:06:39.070 "uuid": "92222598-eb10-9f5a-8e40-8dc875d44954", 00:06:39.070 "is_configured": true, 00:06:39.070 "data_offset": 2048, 00:06:39.071 "data_size": 63488 00:06:39.071 }, 00:06:39.071 { 00:06:39.071 "name": null, 00:06:39.071 "uuid": "951b6a7d-e8b1-6b56-a9fe-3555edca1f3d", 00:06:39.071 "is_configured": false, 00:06:39.071 "data_offset": 2048, 00:06:39.071 "data_size": 63488 00:06:39.071 } 00:06:39.071 ] 00:06:39.071 }' 00:06:39.071 13:30:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:39.071 13:30:18 -- common/autotest_common.sh@10 -- # set +x 00:06:39.639 13:30:18 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:06:39.639 13:30:18 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:06:39.639 13:30:18 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:06:39.639 13:30:18 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:39.639 [2024-07-10 13:30:18.873919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:39.639 [2024-07-10 13:30:18.873969] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:39.639 [2024-07-10 13:30:18.874008] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c01ef00 00:06:39.639 [2024-07-10 13:30:18.874014] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:39.639 [2024-07-10 13:30:18.874101] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:39.639 [2024-07-10 13:30:18.874108] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:39.639 [2024-07-10 13:30:18.874125] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:06:39.639 [2024-07-10 13:30:18.874131] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:39.639 [2024-07-10 13:30:18.874150] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c01f180 00:06:39.639 [2024-07-10 13:30:18.874153] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:39.639 [2024-07-10 13:30:18.874167] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c081e20 00:06:39.639 [2024-07-10 13:30:18.874204] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c01f180 00:06:39.639 [2024-07-10 13:30:18.874207] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c01f180 00:06:39.639 [2024-07-10 13:30:18.874222] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:39.639 pt2 00:06:39.639 13:30:18 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:06:39.639 13:30:18 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:06:39.639 13:30:18 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:39.639 13:30:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:06:39.639 13:30:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
00:06:39.639 13:30:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:39.639 13:30:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:39.639 13:30:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:39.639 13:30:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:39.639 13:30:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:39.639 13:30:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:39.639 13:30:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:39.639 13:30:18 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:39.639 13:30:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:39.897 13:30:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:39.897 "name": "raid_bdev1", 00:06:39.897 "uuid": "8b16183d-3ec0-11ef-b9c4-5b09e08d4792", 00:06:39.897 "strip_size_kb": 64, 00:06:39.897 "state": "online", 00:06:39.897 "raid_level": "concat", 00:06:39.897 "superblock": true, 00:06:39.897 "num_base_bdevs": 2, 00:06:39.897 "num_base_bdevs_discovered": 2, 00:06:39.897 "num_base_bdevs_operational": 2, 00:06:39.897 "base_bdevs_list": [ 00:06:39.897 { 00:06:39.897 "name": "pt1", 00:06:39.897 "uuid": "92222598-eb10-9f5a-8e40-8dc875d44954", 00:06:39.897 "is_configured": true, 00:06:39.897 "data_offset": 2048, 00:06:39.897 "data_size": 63488 00:06:39.897 }, 00:06:39.897 { 00:06:39.897 "name": "pt2", 00:06:39.897 "uuid": "951b6a7d-e8b1-6b56-a9fe-3555edca1f3d", 00:06:39.897 "is_configured": true, 00:06:39.897 "data_offset": 2048, 00:06:39.897 "data_size": 63488 00:06:39.897 } 00:06:39.897 ] 00:06:39.897 }' 00:06:39.897 13:30:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:39.897 13:30:19 -- common/autotest_common.sh@10 -- # set +x 00:06:40.155 13:30:19 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:06:40.155 13:30:19 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:06:40.414 [2024-07-10 13:30:19.593969] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:40.414 13:30:19 -- bdev/bdev_raid.sh@430 -- # '[' 8b16183d-3ec0-11ef-b9c4-5b09e08d4792 '!=' 8b16183d-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:06:40.414 13:30:19 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:06:40.414 13:30:19 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:06:40.414 13:30:19 -- bdev/bdev_raid.sh@197 -- # return 1 00:06:40.414 13:30:19 -- bdev/bdev_raid.sh@511 -- # killprocess 48524 00:06:40.414 13:30:19 -- common/autotest_common.sh@926 -- # '[' -z 48524 ']' 00:06:40.414 13:30:19 -- common/autotest_common.sh@930 -- # kill -0 48524 00:06:40.414 13:30:19 -- common/autotest_common.sh@931 -- # uname 00:06:40.414 13:30:19 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:40.414 13:30:19 -- common/autotest_common.sh@934 -- # ps -c -o command 48524 00:06:40.414 13:30:19 -- common/autotest_common.sh@934 -- # tail -1 00:06:40.414 13:30:19 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:06:40.414 13:30:19 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:06:40.414 killing process with pid 48524 00:06:40.414 13:30:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48524' 00:06:40.414 13:30:19 -- common/autotest_common.sh@945 -- # kill 48524 00:06:40.414 [2024-07-10 13:30:19.628356] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:40.414 
[2024-07-10 13:30:19.628396] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:40.414 [2024-07-10 13:30:19.628408] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:40.414 [2024-07-10 13:30:19.628412] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c01f180 name raid_bdev1, state offline 00:06:40.414 13:30:19 -- common/autotest_common.sh@950 -- # wait 48524 00:06:40.414 [2024-07-10 13:30:19.638034] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@513 -- # return 0 00:06:40.674 00:06:40.674 real 0m5.763s 00:06:40.674 user 0m9.687s 00:06:40.674 sys 0m1.146s 00:06:40.674 13:30:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.674 13:30:19 -- common/autotest_common.sh@10 -- # set +x 00:06:40.674 ************************************ 00:06:40.674 END TEST raid_superblock_test 00:06:40.674 ************************************ 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:06:40.674 13:30:19 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:06:40.674 13:30:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.674 13:30:19 -- common/autotest_common.sh@10 -- # set +x 00:06:40.674 ************************************ 00:06:40.674 START TEST raid_state_function_test 00:06:40.674 ************************************ 00:06:40.674 13:30:19 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@226 -- # raid_pid=48669 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48669' 00:06:40.674 Process raid pid: 48669 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@228 -- # waitforlisten 
48669 /var/tmp/spdk-raid.sock 00:06:40.674 13:30:19 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:40.674 13:30:19 -- common/autotest_common.sh@819 -- # '[' -z 48669 ']' 00:06:40.674 13:30:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:40.674 13:30:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:40.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:40.674 13:30:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:40.674 13:30:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:40.674 13:30:19 -- common/autotest_common.sh@10 -- # set +x 00:06:40.674 [2024-07-10 13:30:19.859911] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:40.674 [2024-07-10 13:30:19.860178] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:41.242 EAL: TSC is not safe to use in SMP mode 00:06:41.242 EAL: TSC is not invariant 00:06:41.242 [2024-07-10 13:30:20.314651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.242 [2024-07-10 13:30:20.406814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.242 [2024-07-10 13:30:20.407320] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.242 [2024-07-10 13:30:20.407331] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.502 13:30:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.502 13:30:20 -- common/autotest_common.sh@852 -- # return 0 00:06:41.502 13:30:20 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:41.760 [2024-07-10 13:30:20.974499] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:41.760 [2024-07-10 13:30:20.974561] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:41.760 [2024-07-10 13:30:20.974566] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:41.760 [2024-07-10 13:30:20.974574] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:41.760 13:30:20 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:41.760 13:30:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:41.760 13:30:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:41.760 13:30:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:06:41.760 13:30:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:06:41.760 13:30:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:41.760 13:30:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:41.760 13:30:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:41.760 13:30:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:41.760 13:30:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:41.760 13:30:20 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:41.760 13:30:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:06:42.019 13:30:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:42.019 "name": "Existed_Raid", 00:06:42.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:42.019 "strip_size_kb": 0, 00:06:42.019 "state": "configuring", 00:06:42.019 "raid_level": "raid1", 00:06:42.019 "superblock": false, 00:06:42.019 "num_base_bdevs": 2, 00:06:42.019 "num_base_bdevs_discovered": 0, 00:06:42.019 "num_base_bdevs_operational": 2, 00:06:42.019 "base_bdevs_list": [ 00:06:42.019 { 00:06:42.019 "name": "BaseBdev1", 00:06:42.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:42.019 "is_configured": false, 00:06:42.019 "data_offset": 0, 00:06:42.019 "data_size": 0 00:06:42.019 }, 00:06:42.019 { 00:06:42.019 "name": "BaseBdev2", 00:06:42.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:42.019 "is_configured": false, 00:06:42.019 "data_offset": 0, 00:06:42.019 "data_size": 0 00:06:42.019 } 00:06:42.019 ] 00:06:42.019 }' 00:06:42.019 13:30:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:42.019 13:30:21 -- common/autotest_common.sh@10 -- # set +x 00:06:42.277 13:30:21 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:42.536 [2024-07-10 13:30:21.662493] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:42.536 [2024-07-10 13:30:21.662523] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c191500 name Existed_Raid, state configuring 00:06:42.536 13:30:21 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:42.536 [2024-07-10 13:30:21.858502] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:42.536 [2024-07-10 13:30:21.858549] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:42.536 [2024-07-10 13:30:21.858552] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:42.536 [2024-07-10 13:30:21.858559] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:42.536 13:30:21 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:42.796 [2024-07-10 13:30:22.055296] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:42.796 BaseBdev1 00:06:42.796 13:30:22 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:06:42.796 13:30:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:06:42.796 13:30:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:42.796 13:30:22 -- common/autotest_common.sh@889 -- # local i 00:06:42.796 13:30:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:42.796 13:30:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:42.796 13:30:22 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:43.055 13:30:22 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:43.315 [ 00:06:43.315 { 00:06:43.315 "name": "BaseBdev1", 00:06:43.315 "aliases": [ 00:06:43.315 "8eb37cbd-3ec0-11ef-b9c4-5b09e08d4792" 00:06:43.315 ], 00:06:43.315 "product_name": "Malloc 
disk", 00:06:43.315 "block_size": 512, 00:06:43.315 "num_blocks": 65536, 00:06:43.315 "uuid": "8eb37cbd-3ec0-11ef-b9c4-5b09e08d4792", 00:06:43.315 "assigned_rate_limits": { 00:06:43.315 "rw_ios_per_sec": 0, 00:06:43.315 "rw_mbytes_per_sec": 0, 00:06:43.315 "r_mbytes_per_sec": 0, 00:06:43.315 "w_mbytes_per_sec": 0 00:06:43.315 }, 00:06:43.315 "claimed": true, 00:06:43.315 "claim_type": "exclusive_write", 00:06:43.315 "zoned": false, 00:06:43.315 "supported_io_types": { 00:06:43.315 "read": true, 00:06:43.315 "write": true, 00:06:43.315 "unmap": true, 00:06:43.315 "write_zeroes": true, 00:06:43.315 "flush": true, 00:06:43.315 "reset": true, 00:06:43.315 "compare": false, 00:06:43.315 "compare_and_write": false, 00:06:43.315 "abort": true, 00:06:43.315 "nvme_admin": false, 00:06:43.315 "nvme_io": false 00:06:43.315 }, 00:06:43.315 "memory_domains": [ 00:06:43.315 { 00:06:43.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:43.315 "dma_device_type": 2 00:06:43.315 } 00:06:43.315 ], 00:06:43.315 "driver_specific": {} 00:06:43.315 } 00:06:43.315 ] 00:06:43.315 13:30:22 -- common/autotest_common.sh@895 -- # return 0 00:06:43.315 13:30:22 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:43.315 13:30:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:43.315 13:30:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:43.315 13:30:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:06:43.315 13:30:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:06:43.315 13:30:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:43.315 13:30:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:43.315 13:30:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:43.315 13:30:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:43.315 13:30:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:43.315 13:30:22 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:43.315 13:30:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:43.315 13:30:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:43.315 "name": "Existed_Raid", 00:06:43.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:43.315 "strip_size_kb": 0, 00:06:43.315 "state": "configuring", 00:06:43.315 "raid_level": "raid1", 00:06:43.315 "superblock": false, 00:06:43.315 "num_base_bdevs": 2, 00:06:43.315 "num_base_bdevs_discovered": 1, 00:06:43.315 "num_base_bdevs_operational": 2, 00:06:43.315 "base_bdevs_list": [ 00:06:43.315 { 00:06:43.315 "name": "BaseBdev1", 00:06:43.315 "uuid": "8eb37cbd-3ec0-11ef-b9c4-5b09e08d4792", 00:06:43.315 "is_configured": true, 00:06:43.315 "data_offset": 0, 00:06:43.315 "data_size": 65536 00:06:43.315 }, 00:06:43.315 { 00:06:43.315 "name": "BaseBdev2", 00:06:43.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:43.315 "is_configured": false, 00:06:43.315 "data_offset": 0, 00:06:43.315 "data_size": 0 00:06:43.315 } 00:06:43.315 ] 00:06:43.315 }' 00:06:43.315 13:30:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:43.315 13:30:22 -- common/autotest_common.sh@10 -- # set +x 00:06:43.574 13:30:22 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:43.832 [2024-07-10 13:30:23.094530] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:43.832 
[2024-07-10 13:30:23.094562] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c191500 name Existed_Raid, state configuring 00:06:43.832 13:30:23 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:06:43.832 13:30:23 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:44.102 [2024-07-10 13:30:23.286553] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:44.102 [2024-07-10 13:30:23.287173] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:44.102 [2024-07-10 13:30:23.287216] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:44.102 13:30:23 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:06:44.102 13:30:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:44.102 13:30:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:44.102 13:30:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:44.102 13:30:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:44.102 13:30:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:06:44.102 13:30:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:06:44.102 13:30:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:44.102 13:30:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:44.102 13:30:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:44.102 13:30:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:44.102 13:30:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:44.102 13:30:23 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:44.102 13:30:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:44.381 13:30:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:44.381 "name": "Existed_Raid", 00:06:44.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.381 "strip_size_kb": 0, 00:06:44.381 "state": "configuring", 00:06:44.381 "raid_level": "raid1", 00:06:44.381 "superblock": false, 00:06:44.381 "num_base_bdevs": 2, 00:06:44.381 "num_base_bdevs_discovered": 1, 00:06:44.381 "num_base_bdevs_operational": 2, 00:06:44.381 "base_bdevs_list": [ 00:06:44.381 { 00:06:44.381 "name": "BaseBdev1", 00:06:44.381 "uuid": "8eb37cbd-3ec0-11ef-b9c4-5b09e08d4792", 00:06:44.381 "is_configured": true, 00:06:44.381 "data_offset": 0, 00:06:44.381 "data_size": 65536 00:06:44.381 }, 00:06:44.381 { 00:06:44.381 "name": "BaseBdev2", 00:06:44.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.381 "is_configured": false, 00:06:44.381 "data_offset": 0, 00:06:44.381 "data_size": 0 00:06:44.381 } 00:06:44.381 ] 00:06:44.381 }' 00:06:44.381 13:30:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:44.381 13:30:23 -- common/autotest_common.sh@10 -- # set +x 00:06:44.640 13:30:23 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:06:44.899 [2024-07-10 13:30:23.990667] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:44.899 [2024-07-10 13:30:23.990700] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c191a00 00:06:44.899 [2024-07-10 13:30:23.990703] 
bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:44.899 [2024-07-10 13:30:23.990738] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c1f4ec0 00:06:44.899 [2024-07-10 13:30:23.990813] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c191a00 00:06:44.899 [2024-07-10 13:30:23.990816] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c191a00 00:06:44.899 [2024-07-10 13:30:23.990844] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:44.899 BaseBdev2 00:06:44.899 13:30:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:06:44.899 13:30:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:06:44.899 13:30:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:44.899 13:30:24 -- common/autotest_common.sh@889 -- # local i 00:06:44.899 13:30:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:44.899 13:30:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:44.899 13:30:24 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:44.899 13:30:24 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:45.158 [ 00:06:45.158 { 00:06:45.158 "name": "BaseBdev2", 00:06:45.158 "aliases": [ 00:06:45.158 "8fdae741-3ec0-11ef-b9c4-5b09e08d4792" 00:06:45.158 ], 00:06:45.158 "product_name": "Malloc disk", 00:06:45.158 "block_size": 512, 00:06:45.158 "num_blocks": 65536, 00:06:45.158 "uuid": "8fdae741-3ec0-11ef-b9c4-5b09e08d4792", 00:06:45.158 "assigned_rate_limits": { 00:06:45.158 "rw_ios_per_sec": 0, 00:06:45.158 "rw_mbytes_per_sec": 0, 00:06:45.158 "r_mbytes_per_sec": 0, 00:06:45.158 "w_mbytes_per_sec": 0 00:06:45.158 }, 00:06:45.158 "claimed": true, 00:06:45.158 "claim_type": "exclusive_write", 00:06:45.158 "zoned": false, 00:06:45.158 "supported_io_types": { 00:06:45.158 "read": true, 00:06:45.158 "write": true, 00:06:45.158 "unmap": true, 00:06:45.158 "write_zeroes": true, 00:06:45.158 "flush": true, 00:06:45.158 "reset": true, 00:06:45.158 "compare": false, 00:06:45.158 "compare_and_write": false, 00:06:45.158 "abort": true, 00:06:45.158 "nvme_admin": false, 00:06:45.158 "nvme_io": false 00:06:45.158 }, 00:06:45.158 "memory_domains": [ 00:06:45.158 { 00:06:45.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.158 "dma_device_type": 2 00:06:45.158 } 00:06:45.158 ], 00:06:45.158 "driver_specific": {} 00:06:45.158 } 00:06:45.158 ] 00:06:45.158 13:30:24 -- common/autotest_common.sh@895 -- # return 0 00:06:45.158 13:30:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:06:45.158 13:30:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:45.158 13:30:24 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:06:45.158 13:30:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:45.158 13:30:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:45.158 13:30:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:06:45.158 13:30:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:06:45.158 13:30:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:45.158 13:30:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:45.158 13:30:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:45.158 13:30:24 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:45.158 13:30:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:45.158 13:30:24 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:45.158 13:30:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:45.417 13:30:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:45.417 "name": "Existed_Raid", 00:06:45.417 "uuid": "8fdaed13-3ec0-11ef-b9c4-5b09e08d4792", 00:06:45.417 "strip_size_kb": 0, 00:06:45.417 "state": "online", 00:06:45.417 "raid_level": "raid1", 00:06:45.417 "superblock": false, 00:06:45.417 "num_base_bdevs": 2, 00:06:45.417 "num_base_bdevs_discovered": 2, 00:06:45.417 "num_base_bdevs_operational": 2, 00:06:45.417 "base_bdevs_list": [ 00:06:45.417 { 00:06:45.417 "name": "BaseBdev1", 00:06:45.417 "uuid": "8eb37cbd-3ec0-11ef-b9c4-5b09e08d4792", 00:06:45.417 "is_configured": true, 00:06:45.417 "data_offset": 0, 00:06:45.417 "data_size": 65536 00:06:45.417 }, 00:06:45.417 { 00:06:45.417 "name": "BaseBdev2", 00:06:45.417 "uuid": "8fdae741-3ec0-11ef-b9c4-5b09e08d4792", 00:06:45.417 "is_configured": true, 00:06:45.417 "data_offset": 0, 00:06:45.417 "data_size": 65536 00:06:45.417 } 00:06:45.417 ] 00:06:45.417 }' 00:06:45.417 13:30:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:45.417 13:30:24 -- common/autotest_common.sh@10 -- # set +x 00:06:45.676 13:30:24 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:45.936 [2024-07-10 13:30:25.078589] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:45.936 13:30:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:06:45.936 13:30:25 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:06:45.936 13:30:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:06:45.936 13:30:25 -- bdev/bdev_raid.sh@196 -- # return 0 00:06:45.936 13:30:25 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:06:45.936 13:30:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:06:45.936 13:30:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:45.936 13:30:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:45.936 13:30:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:06:45.936 13:30:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:06:45.936 13:30:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:06:45.936 13:30:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:45.936 13:30:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:45.936 13:30:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:45.936 13:30:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:45.936 13:30:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:45.936 13:30:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:46.195 13:30:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:46.195 "name": "Existed_Raid", 00:06:46.195 "uuid": "8fdaed13-3ec0-11ef-b9c4-5b09e08d4792", 00:06:46.195 "strip_size_kb": 0, 00:06:46.195 "state": "online", 00:06:46.195 "raid_level": "raid1", 00:06:46.195 "superblock": false, 00:06:46.195 "num_base_bdevs": 2, 00:06:46.195 "num_base_bdevs_discovered": 1, 00:06:46.195 "num_base_bdevs_operational": 1, 00:06:46.195 
"base_bdevs_list": [ 00:06:46.195 { 00:06:46.195 "name": null, 00:06:46.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:46.195 "is_configured": false, 00:06:46.195 "data_offset": 0, 00:06:46.195 "data_size": 65536 00:06:46.195 }, 00:06:46.195 { 00:06:46.195 "name": "BaseBdev2", 00:06:46.195 "uuid": "8fdae741-3ec0-11ef-b9c4-5b09e08d4792", 00:06:46.195 "is_configured": true, 00:06:46.195 "data_offset": 0, 00:06:46.195 "data_size": 65536 00:06:46.195 } 00:06:46.195 ] 00:06:46.195 }' 00:06:46.195 13:30:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:46.195 13:30:25 -- common/autotest_common.sh@10 -- # set +x 00:06:46.455 13:30:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:06:46.455 13:30:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:46.455 13:30:25 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:46.455 13:30:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:06:46.455 13:30:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:06:46.455 13:30:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:46.455 13:30:25 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:06:46.714 [2024-07-10 13:30:25.923373] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:46.714 [2024-07-10 13:30:25.923401] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:46.714 [2024-07-10 13:30:25.923412] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:46.714 [2024-07-10 13:30:25.928098] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:46.714 [2024-07-10 13:30:25.928115] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c191a00 name Existed_Raid, state offline 00:06:46.714 13:30:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:06:46.714 13:30:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:46.714 13:30:25 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:46.714 13:30:25 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:06:46.975 13:30:26 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:06:46.975 13:30:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:06:46.975 13:30:26 -- bdev/bdev_raid.sh@287 -- # killprocess 48669 00:06:46.975 13:30:26 -- common/autotest_common.sh@926 -- # '[' -z 48669 ']' 00:06:46.975 13:30:26 -- common/autotest_common.sh@930 -- # kill -0 48669 00:06:46.975 13:30:26 -- common/autotest_common.sh@931 -- # uname 00:06:46.975 13:30:26 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:46.975 13:30:26 -- common/autotest_common.sh@934 -- # ps -c -o command 48669 00:06:46.975 13:30:26 -- common/autotest_common.sh@934 -- # tail -1 00:06:46.975 13:30:26 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:06:46.975 13:30:26 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:06:46.975 killing process with pid 48669 00:06:46.975 13:30:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48669' 00:06:46.975 13:30:26 -- common/autotest_common.sh@945 -- # kill 48669 00:06:46.975 [2024-07-10 13:30:26.154286] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:46.975 [2024-07-10 13:30:26.154332] 
bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:46.975 13:30:26 -- common/autotest_common.sh@950 -- # wait 48669 00:06:46.975 13:30:26 -- bdev/bdev_raid.sh@289 -- # return 0 00:06:46.975 00:06:46.975 real 0m6.462s 00:06:46.975 user 0m10.950s 00:06:46.975 sys 0m1.312s 00:06:46.975 13:30:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.975 13:30:26 -- common/autotest_common.sh@10 -- # set +x 00:06:46.975 ************************************ 00:06:46.975 END TEST raid_state_function_test 00:06:46.975 ************************************ 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:06:47.235 13:30:26 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:06:47.235 13:30:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.235 13:30:26 -- common/autotest_common.sh@10 -- # set +x 00:06:47.235 ************************************ 00:06:47.235 START TEST raid_state_function_test_sb 00:06:47.235 ************************************ 00:06:47.235 13:30:26 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@226 -- # raid_pid=48865 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48865' 00:06:47.235 Process raid pid: 48865 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@228 -- # waitforlisten 48865 /var/tmp/spdk-raid.sock 00:06:47.235 13:30:26 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:47.235 13:30:26 -- common/autotest_common.sh@819 -- # '[' -z 48865 ']' 00:06:47.235 13:30:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:47.235 13:30:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:47.235 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk-raid.sock... 00:06:47.235 13:30:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:47.235 13:30:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:47.235 13:30:26 -- common/autotest_common.sh@10 -- # set +x 00:06:47.235 [2024-07-10 13:30:26.378716] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:47.235 [2024-07-10 13:30:26.379064] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:47.494 EAL: TSC is not safe to use in SMP mode 00:06:47.494 EAL: TSC is not invariant 00:06:47.494 [2024-07-10 13:30:26.814814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.753 [2024-07-10 13:30:26.904797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.753 [2024-07-10 13:30:26.905237] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.753 [2024-07-10 13:30:26.905263] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:48.013 13:30:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:48.013 13:30:27 -- common/autotest_common.sh@852 -- # return 0 00:06:48.013 13:30:27 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:48.273 [2024-07-10 13:30:27.448335] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:48.273 [2024-07-10 13:30:27.448394] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:48.273 [2024-07-10 13:30:27.448398] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:48.273 [2024-07-10 13:30:27.448405] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:48.273 13:30:27 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:48.273 13:30:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:48.273 13:30:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:48.273 13:30:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:06:48.273 13:30:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:06:48.273 13:30:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:48.273 13:30:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:48.273 13:30:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:48.273 13:30:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:48.273 13:30:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:48.273 13:30:27 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:48.273 13:30:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:48.532 13:30:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:48.532 "name": "Existed_Raid", 00:06:48.532 "uuid": "91ea8454-3ec0-11ef-b9c4-5b09e08d4792", 00:06:48.532 "strip_size_kb": 0, 00:06:48.532 "state": "configuring", 00:06:48.532 "raid_level": "raid1", 00:06:48.532 "superblock": true, 00:06:48.532 "num_base_bdevs": 2, 00:06:48.532 "num_base_bdevs_discovered": 0, 00:06:48.532 "num_base_bdevs_operational": 2, 00:06:48.532 "base_bdevs_list": [ 00:06:48.532 { 
00:06:48.532 "name": "BaseBdev1", 00:06:48.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:48.532 "is_configured": false, 00:06:48.532 "data_offset": 0, 00:06:48.532 "data_size": 0 00:06:48.532 }, 00:06:48.532 { 00:06:48.532 "name": "BaseBdev2", 00:06:48.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:48.532 "is_configured": false, 00:06:48.532 "data_offset": 0, 00:06:48.532 "data_size": 0 00:06:48.532 } 00:06:48.532 ] 00:06:48.532 }' 00:06:48.532 13:30:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:48.532 13:30:27 -- common/autotest_common.sh@10 -- # set +x 00:06:48.792 13:30:27 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:49.052 [2024-07-10 13:30:28.144745] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:49.052 [2024-07-10 13:30:28.144773] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c6cd500 name Existed_Raid, state configuring 00:06:49.052 13:30:28 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:49.052 [2024-07-10 13:30:28.336848] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:49.052 [2024-07-10 13:30:28.336888] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:49.052 [2024-07-10 13:30:28.336893] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:49.052 [2024-07-10 13:30:28.336899] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:49.052 13:30:28 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:49.313 [2024-07-10 13:30:28.525752] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:49.313 BaseBdev1 00:06:49.313 13:30:28 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:06:49.313 13:30:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:06:49.313 13:30:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:49.313 13:30:28 -- common/autotest_common.sh@889 -- # local i 00:06:49.313 13:30:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:49.313 13:30:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:49.313 13:30:28 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:49.572 13:30:28 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:49.833 [ 00:06:49.833 { 00:06:49.833 "name": "BaseBdev1", 00:06:49.833 "aliases": [ 00:06:49.833 "928ecc0e-3ec0-11ef-b9c4-5b09e08d4792" 00:06:49.833 ], 00:06:49.833 "product_name": "Malloc disk", 00:06:49.833 "block_size": 512, 00:06:49.833 "num_blocks": 65536, 00:06:49.833 "uuid": "928ecc0e-3ec0-11ef-b9c4-5b09e08d4792", 00:06:49.833 "assigned_rate_limits": { 00:06:49.833 "rw_ios_per_sec": 0, 00:06:49.833 "rw_mbytes_per_sec": 0, 00:06:49.833 "r_mbytes_per_sec": 0, 00:06:49.833 "w_mbytes_per_sec": 0 00:06:49.833 }, 00:06:49.833 "claimed": true, 00:06:49.833 "claim_type": "exclusive_write", 00:06:49.833 "zoned": false, 00:06:49.833 "supported_io_types": { 00:06:49.833 "read": true, 00:06:49.833 
"write": true, 00:06:49.833 "unmap": true, 00:06:49.833 "write_zeroes": true, 00:06:49.833 "flush": true, 00:06:49.833 "reset": true, 00:06:49.833 "compare": false, 00:06:49.833 "compare_and_write": false, 00:06:49.833 "abort": true, 00:06:49.833 "nvme_admin": false, 00:06:49.833 "nvme_io": false 00:06:49.833 }, 00:06:49.833 "memory_domains": [ 00:06:49.833 { 00:06:49.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.833 "dma_device_type": 2 00:06:49.833 } 00:06:49.833 ], 00:06:49.833 "driver_specific": {} 00:06:49.833 } 00:06:49.833 ] 00:06:49.833 13:30:28 -- common/autotest_common.sh@895 -- # return 0 00:06:49.833 13:30:28 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:49.833 13:30:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:49.833 13:30:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:49.833 13:30:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:06:49.833 13:30:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:06:49.833 13:30:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:49.833 13:30:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:49.833 13:30:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:49.833 13:30:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:49.833 13:30:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:49.833 13:30:28 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:49.833 13:30:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:49.833 13:30:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:49.833 "name": "Existed_Raid", 00:06:49.833 "uuid": "927217ef-3ec0-11ef-b9c4-5b09e08d4792", 00:06:49.833 "strip_size_kb": 0, 00:06:49.833 "state": "configuring", 00:06:49.833 "raid_level": "raid1", 00:06:49.833 "superblock": true, 00:06:49.833 "num_base_bdevs": 2, 00:06:49.833 "num_base_bdevs_discovered": 1, 00:06:49.833 "num_base_bdevs_operational": 2, 00:06:49.833 "base_bdevs_list": [ 00:06:49.833 { 00:06:49.833 "name": "BaseBdev1", 00:06:49.833 "uuid": "928ecc0e-3ec0-11ef-b9c4-5b09e08d4792", 00:06:49.833 "is_configured": true, 00:06:49.833 "data_offset": 2048, 00:06:49.833 "data_size": 63488 00:06:49.833 }, 00:06:49.833 { 00:06:49.833 "name": "BaseBdev2", 00:06:49.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:49.833 "is_configured": false, 00:06:49.833 "data_offset": 0, 00:06:49.833 "data_size": 0 00:06:49.833 } 00:06:49.833 ] 00:06:49.833 }' 00:06:49.833 13:30:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:49.833 13:30:29 -- common/autotest_common.sh@10 -- # set +x 00:06:50.093 13:30:29 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:50.352 [2024-07-10 13:30:29.601593] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:50.352 [2024-07-10 13:30:29.601625] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c6cd500 name Existed_Raid, state configuring 00:06:50.352 13:30:29 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:06:50.352 13:30:29 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:50.612 13:30:29 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 
32 512 -b BaseBdev1 00:06:50.871 BaseBdev1 00:06:50.871 13:30:29 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:06:50.871 13:30:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:06:50.871 13:30:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:50.871 13:30:29 -- common/autotest_common.sh@889 -- # local i 00:06:50.872 13:30:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:50.872 13:30:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:50.872 13:30:29 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:50.872 13:30:30 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:51.132 [ 00:06:51.132 { 00:06:51.132 "name": "BaseBdev1", 00:06:51.132 "aliases": [ 00:06:51.132 "936bfc9f-3ec0-11ef-b9c4-5b09e08d4792" 00:06:51.132 ], 00:06:51.132 "product_name": "Malloc disk", 00:06:51.132 "block_size": 512, 00:06:51.132 "num_blocks": 65536, 00:06:51.132 "uuid": "936bfc9f-3ec0-11ef-b9c4-5b09e08d4792", 00:06:51.132 "assigned_rate_limits": { 00:06:51.132 "rw_ios_per_sec": 0, 00:06:51.132 "rw_mbytes_per_sec": 0, 00:06:51.132 "r_mbytes_per_sec": 0, 00:06:51.132 "w_mbytes_per_sec": 0 00:06:51.132 }, 00:06:51.132 "claimed": false, 00:06:51.132 "zoned": false, 00:06:51.132 "supported_io_types": { 00:06:51.132 "read": true, 00:06:51.132 "write": true, 00:06:51.132 "unmap": true, 00:06:51.132 "write_zeroes": true, 00:06:51.132 "flush": true, 00:06:51.132 "reset": true, 00:06:51.132 "compare": false, 00:06:51.132 "compare_and_write": false, 00:06:51.132 "abort": true, 00:06:51.132 "nvme_admin": false, 00:06:51.132 "nvme_io": false 00:06:51.132 }, 00:06:51.132 "memory_domains": [ 00:06:51.132 { 00:06:51.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.132 "dma_device_type": 2 00:06:51.132 } 00:06:51.132 ], 00:06:51.132 "driver_specific": {} 00:06:51.132 } 00:06:51.132 ] 00:06:51.132 13:30:30 -- common/autotest_common.sh@895 -- # return 0 00:06:51.132 13:30:30 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:51.392 [2024-07-10 13:30:30.554885] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:51.392 [2024-07-10 13:30:30.555323] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:51.392 [2024-07-10 13:30:30.555361] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:51.392 13:30:30 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:06:51.392 13:30:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:51.392 13:30:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:51.392 13:30:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:51.392 13:30:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:51.392 13:30:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:06:51.392 13:30:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:06:51.392 13:30:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:51.392 13:30:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:51.392 13:30:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:51.392 13:30:30 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:06:51.392 13:30:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:51.392 13:30:30 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:51.392 13:30:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.651 13:30:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:51.651 "name": "Existed_Raid", 00:06:51.651 "uuid": "93c489f4-3ec0-11ef-b9c4-5b09e08d4792", 00:06:51.651 "strip_size_kb": 0, 00:06:51.651 "state": "configuring", 00:06:51.651 "raid_level": "raid1", 00:06:51.651 "superblock": true, 00:06:51.651 "num_base_bdevs": 2, 00:06:51.651 "num_base_bdevs_discovered": 1, 00:06:51.651 "num_base_bdevs_operational": 2, 00:06:51.651 "base_bdevs_list": [ 00:06:51.651 { 00:06:51.651 "name": "BaseBdev1", 00:06:51.651 "uuid": "936bfc9f-3ec0-11ef-b9c4-5b09e08d4792", 00:06:51.651 "is_configured": true, 00:06:51.651 "data_offset": 2048, 00:06:51.651 "data_size": 63488 00:06:51.651 }, 00:06:51.651 { 00:06:51.651 "name": "BaseBdev2", 00:06:51.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.651 "is_configured": false, 00:06:51.651 "data_offset": 0, 00:06:51.651 "data_size": 0 00:06:51.651 } 00:06:51.651 ] 00:06:51.651 }' 00:06:51.651 13:30:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:51.651 13:30:30 -- common/autotest_common.sh@10 -- # set +x 00:06:51.912 13:30:31 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:06:51.912 [2024-07-10 13:30:31.231416] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:51.912 [2024-07-10 13:30:31.231475] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c6cda00 00:06:51.912 [2024-07-10 13:30:31.231481] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:06:51.912 [2024-07-10 13:30:31.231499] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c730ec0 00:06:51.912 [2024-07-10 13:30:31.231528] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c6cda00 00:06:51.912 [2024-07-10 13:30:31.231531] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c6cda00 00:06:51.912 [2024-07-10 13:30:31.231546] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.912 BaseBdev2 00:06:51.912 13:30:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:06:51.912 13:30:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:06:51.912 13:30:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:51.912 13:30:31 -- common/autotest_common.sh@889 -- # local i 00:06:51.912 13:30:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:51.912 13:30:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:51.912 13:30:31 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:52.172 13:30:31 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:52.431 [ 00:06:52.431 { 00:06:52.431 "name": "BaseBdev2", 00:06:52.431 "aliases": [ 00:06:52.431 "942bc115-3ec0-11ef-b9c4-5b09e08d4792" 00:06:52.431 ], 00:06:52.431 "product_name": "Malloc disk", 00:06:52.431 "block_size": 512, 00:06:52.431 "num_blocks": 65536, 
00:06:52.431 "uuid": "942bc115-3ec0-11ef-b9c4-5b09e08d4792", 00:06:52.431 "assigned_rate_limits": { 00:06:52.431 "rw_ios_per_sec": 0, 00:06:52.431 "rw_mbytes_per_sec": 0, 00:06:52.431 "r_mbytes_per_sec": 0, 00:06:52.431 "w_mbytes_per_sec": 0 00:06:52.431 }, 00:06:52.431 "claimed": true, 00:06:52.431 "claim_type": "exclusive_write", 00:06:52.431 "zoned": false, 00:06:52.431 "supported_io_types": { 00:06:52.431 "read": true, 00:06:52.432 "write": true, 00:06:52.432 "unmap": true, 00:06:52.432 "write_zeroes": true, 00:06:52.432 "flush": true, 00:06:52.432 "reset": true, 00:06:52.432 "compare": false, 00:06:52.432 "compare_and_write": false, 00:06:52.432 "abort": true, 00:06:52.432 "nvme_admin": false, 00:06:52.432 "nvme_io": false 00:06:52.432 }, 00:06:52.432 "memory_domains": [ 00:06:52.432 { 00:06:52.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.432 "dma_device_type": 2 00:06:52.432 } 00:06:52.432 ], 00:06:52.432 "driver_specific": {} 00:06:52.432 } 00:06:52.432 ] 00:06:52.432 13:30:31 -- common/autotest_common.sh@895 -- # return 0 00:06:52.432 13:30:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:06:52.432 13:30:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:52.432 13:30:31 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:06:52.432 13:30:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:52.432 13:30:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:52.432 13:30:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:06:52.432 13:30:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:06:52.432 13:30:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:52.432 13:30:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:52.432 13:30:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:52.432 13:30:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:52.432 13:30:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:52.432 13:30:31 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:52.432 13:30:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.691 13:30:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:52.691 "name": "Existed_Raid", 00:06:52.691 "uuid": "93c489f4-3ec0-11ef-b9c4-5b09e08d4792", 00:06:52.691 "strip_size_kb": 0, 00:06:52.691 "state": "online", 00:06:52.691 "raid_level": "raid1", 00:06:52.691 "superblock": true, 00:06:52.691 "num_base_bdevs": 2, 00:06:52.691 "num_base_bdevs_discovered": 2, 00:06:52.691 "num_base_bdevs_operational": 2, 00:06:52.691 "base_bdevs_list": [ 00:06:52.692 { 00:06:52.692 "name": "BaseBdev1", 00:06:52.692 "uuid": "936bfc9f-3ec0-11ef-b9c4-5b09e08d4792", 00:06:52.692 "is_configured": true, 00:06:52.692 "data_offset": 2048, 00:06:52.692 "data_size": 63488 00:06:52.692 }, 00:06:52.692 { 00:06:52.692 "name": "BaseBdev2", 00:06:52.692 "uuid": "942bc115-3ec0-11ef-b9c4-5b09e08d4792", 00:06:52.692 "is_configured": true, 00:06:52.692 "data_offset": 2048, 00:06:52.692 "data_size": 63488 00:06:52.692 } 00:06:52.692 ] 00:06:52.692 }' 00:06:52.692 13:30:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:52.692 13:30:31 -- common/autotest_common.sh@10 -- # set +x 00:06:52.951 13:30:32 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:53.213 [2024-07-10 13:30:32.339979] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:53.213 13:30:32 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:06:53.213 13:30:32 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:06:53.213 13:30:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:06:53.213 13:30:32 -- bdev/bdev_raid.sh@196 -- # return 0 00:06:53.213 13:30:32 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:06:53.213 13:30:32 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:06:53.213 13:30:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:53.213 13:30:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:53.213 13:30:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:06:53.213 13:30:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:06:53.213 13:30:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:06:53.213 13:30:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:53.213 13:30:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:53.213 13:30:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:53.213 13:30:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:53.213 13:30:32 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:53.213 13:30:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:53.475 13:30:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:53.475 "name": "Existed_Raid", 00:06:53.475 "uuid": "93c489f4-3ec0-11ef-b9c4-5b09e08d4792", 00:06:53.475 "strip_size_kb": 0, 00:06:53.475 "state": "online", 00:06:53.475 "raid_level": "raid1", 00:06:53.475 "superblock": true, 00:06:53.475 "num_base_bdevs": 2, 00:06:53.475 "num_base_bdevs_discovered": 1, 00:06:53.475 "num_base_bdevs_operational": 1, 00:06:53.475 "base_bdevs_list": [ 00:06:53.475 { 00:06:53.475 "name": null, 00:06:53.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:53.475 "is_configured": false, 00:06:53.475 "data_offset": 2048, 00:06:53.475 "data_size": 63488 00:06:53.475 }, 00:06:53.475 { 00:06:53.475 "name": "BaseBdev2", 00:06:53.475 "uuid": "942bc115-3ec0-11ef-b9c4-5b09e08d4792", 00:06:53.475 "is_configured": true, 00:06:53.475 "data_offset": 2048, 00:06:53.475 "data_size": 63488 00:06:53.475 } 00:06:53.475 ] 00:06:53.475 }' 00:06:53.475 13:30:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:53.475 13:30:32 -- common/autotest_common.sh@10 -- # set +x 00:06:53.735 13:30:32 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:06:53.735 13:30:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:53.735 13:30:32 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:53.735 13:30:32 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:06:53.995 13:30:33 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:06:53.995 13:30:33 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:53.995 13:30:33 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:06:53.995 [2024-07-10 13:30:33.273293] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:53.995 [2024-07-10 13:30:33.273318] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:53.995 [2024-07-10 13:30:33.273331] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:06:53.995 [2024-07-10 13:30:33.278166] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:53.995 [2024-07-10 13:30:33.278192] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c6cda00 name Existed_Raid, state offline 00:06:53.995 13:30:33 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:06:53.995 13:30:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:53.995 13:30:33 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:53.995 13:30:33 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:06:54.253 13:30:33 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:06:54.253 13:30:33 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:06:54.253 13:30:33 -- bdev/bdev_raid.sh@287 -- # killprocess 48865 00:06:54.253 13:30:33 -- common/autotest_common.sh@926 -- # '[' -z 48865 ']' 00:06:54.253 13:30:33 -- common/autotest_common.sh@930 -- # kill -0 48865 00:06:54.253 13:30:33 -- common/autotest_common.sh@931 -- # uname 00:06:54.253 13:30:33 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:54.253 13:30:33 -- common/autotest_common.sh@934 -- # ps -c -o command 48865 00:06:54.253 13:30:33 -- common/autotest_common.sh@934 -- # tail -1 00:06:54.253 13:30:33 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:06:54.253 13:30:33 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:06:54.253 killing process with pid 48865 00:06:54.253 13:30:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48865' 00:06:54.253 13:30:33 -- common/autotest_common.sh@945 -- # kill 48865 00:06:54.253 [2024-07-10 13:30:33.494909] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:54.253 [2024-07-10 13:30:33.494949] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:54.253 13:30:33 -- common/autotest_common.sh@950 -- # wait 48865 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@289 -- # return 0 00:06:54.513 00:06:54.513 real 0m7.285s 00:06:54.513 user 0m12.387s 00:06:54.513 sys 0m1.492s 00:06:54.513 13:30:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.513 13:30:33 -- common/autotest_common.sh@10 -- # set +x 00:06:54.513 ************************************ 00:06:54.513 END TEST raid_state_function_test_sb 00:06:54.513 ************************************ 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:06:54.513 13:30:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:54.513 13:30:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.513 13:30:33 -- common/autotest_common.sh@10 -- # set +x 00:06:54.513 ************************************ 00:06:54.513 START TEST raid_superblock_test 00:06:54.513 ************************************ 00:06:54.513 13:30:33 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:06:54.513 
13:30:33 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@357 -- # raid_pid=49064 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@358 -- # waitforlisten 49064 /var/tmp/spdk-raid.sock 00:06:54.513 13:30:33 -- common/autotest_common.sh@819 -- # '[' -z 49064 ']' 00:06:54.513 13:30:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:54.513 13:30:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:54.513 13:30:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:54.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:54.513 13:30:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:54.513 13:30:33 -- common/autotest_common.sh@10 -- # set +x 00:06:54.513 13:30:33 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:06:54.513 [2024-07-10 13:30:33.700949] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:54.513 [2024-07-10 13:30:33.701302] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:55.081 EAL: TSC is not safe to use in SMP mode 00:06:55.081 EAL: TSC is not invariant 00:06:55.081 [2024-07-10 13:30:34.133173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.081 [2024-07-10 13:30:34.230024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.081 [2024-07-10 13:30:34.230519] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.081 [2024-07-10 13:30:34.230529] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.341 13:30:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:55.341 13:30:34 -- common/autotest_common.sh@852 -- # return 0 00:06:55.341 13:30:34 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:06:55.341 13:30:34 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:06:55.341 13:30:34 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:06:55.341 13:30:34 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:06:55.341 13:30:34 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:55.341 13:30:34 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:55.341 13:30:34 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:06:55.341 13:30:34 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:55.341 13:30:34 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:06:55.601 malloc1 00:06:55.601 13:30:34 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:06:55.861 [2024-07-10 13:30:35.046446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:55.861 [2024-07-10 13:30:35.046503] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:55.861 [2024-07-10 13:30:35.047025] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d935780 00:06:55.861 [2024-07-10 13:30:35.047051] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:55.861 [2024-07-10 13:30:35.047738] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:55.861 [2024-07-10 13:30:35.047769] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:55.861 pt1 00:06:55.861 13:30:35 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:06:55.861 13:30:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:06:55.861 13:30:35 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:06:55.861 13:30:35 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:06:55.861 13:30:35 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:55.861 13:30:35 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:55.861 13:30:35 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:06:55.861 13:30:35 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:55.861 13:30:35 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:06:56.120 malloc2 00:06:56.120 13:30:35 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:56.120 [2024-07-10 13:30:35.434678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:56.120 [2024-07-10 13:30:35.434736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:56.120 [2024-07-10 13:30:35.434761] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d935c80 00:06:56.120 [2024-07-10 13:30:35.434768] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:56.120 [2024-07-10 13:30:35.435279] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:56.120 [2024-07-10 13:30:35.435308] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:56.120 pt2 00:06:56.120 13:30:35 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:06:56.120 13:30:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:06:56.120 13:30:35 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:06:56.380 [2024-07-10 13:30:35.634821] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:56.380 [2024-07-10 13:30:35.635263] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:56.380 [2024-07-10 13:30:35.635320] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d935f00 00:06:56.380 [2024-07-10 13:30:35.635325] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:06:56.380 [2024-07-10 13:30:35.635358] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d998e20 00:06:56.380 [2024-07-10 13:30:35.635424] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d935f00 00:06:56.380 [2024-07-10 13:30:35.635428] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d935f00 00:06:56.380 [2024-07-10 13:30:35.635451] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.380 13:30:35 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:06:56.380 13:30:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:06:56.380 13:30:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:56.380 13:30:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:06:56.380 13:30:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:06:56.380 13:30:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:56.380 13:30:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:56.380 13:30:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:56.380 13:30:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:56.380 13:30:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:56.380 13:30:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:56.380 13:30:35 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:56.640 13:30:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:56.640 "name": "raid_bdev1", 00:06:56.640 "uuid": "96cbacba-3ec0-11ef-b9c4-5b09e08d4792", 00:06:56.640 "strip_size_kb": 0, 00:06:56.640 "state": "online", 00:06:56.640 "raid_level": "raid1", 00:06:56.640 "superblock": true, 00:06:56.640 "num_base_bdevs": 2, 00:06:56.640 "num_base_bdevs_discovered": 2, 00:06:56.640 "num_base_bdevs_operational": 2, 00:06:56.640 "base_bdevs_list": [ 00:06:56.640 { 00:06:56.640 "name": "pt1", 00:06:56.640 "uuid": "35a01f86-5da5-f058-9d56-90b35e621403", 00:06:56.640 "is_configured": true, 00:06:56.640 "data_offset": 2048, 00:06:56.640 "data_size": 63488 00:06:56.640 }, 00:06:56.640 { 00:06:56.640 "name": "pt2", 00:06:56.640 "uuid": "48fb1041-2c8f-4859-bee4-b857e56c02ef", 00:06:56.640 "is_configured": true, 00:06:56.640 "data_offset": 2048, 00:06:56.640 "data_size": 63488 00:06:56.640 } 00:06:56.640 ] 00:06:56.640 }' 00:06:56.640 13:30:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:56.640 13:30:35 -- common/autotest_common.sh@10 -- # set +x 00:06:56.900 13:30:36 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:06:56.900 13:30:36 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:06:57.159 [2024-07-10 13:30:36.307227] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.159 13:30:36 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=96cbacba-3ec0-11ef-b9c4-5b09e08d4792 00:06:57.159 13:30:36 -- bdev/bdev_raid.sh@380 -- # '[' -z 96cbacba-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:06:57.159 13:30:36 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:06:57.159 [2024-07-10 13:30:36.499328] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:57.159 [2024-07-10 13:30:36.499354] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:57.159 [2024-07-10 13:30:36.499375] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:57.159 [2024-07-10 
13:30:36.499387] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:57.160 [2024-07-10 13:30:36.499390] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d935f00 name raid_bdev1, state offline 00:06:57.420 13:30:36 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:57.420 13:30:36 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:06:57.420 13:30:36 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:06:57.420 13:30:36 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:06:57.420 13:30:36 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:06:57.420 13:30:36 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:06:57.679 13:30:36 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:06:57.679 13:30:36 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:06:57.939 13:30:37 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:57.939 13:30:37 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:06:57.939 13:30:37 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:06:57.939 13:30:37 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:06:57.939 13:30:37 -- common/autotest_common.sh@640 -- # local es=0 00:06:57.939 13:30:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:06:57.939 13:30:37 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.939 13:30:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.939 13:30:37 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.939 13:30:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.939 13:30:37 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.939 13:30:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.939 13:30:37 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.939 13:30:37 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:57.939 13:30:37 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:06:58.198 [2024-07-10 13:30:37.427908] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:58.198 [2024-07-10 13:30:37.428385] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:58.198 [2024-07-10 13:30:37.428408] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:06:58.198 [2024-07-10 13:30:37.428445] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:06:58.198 [2024-07-10 13:30:37.428453] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:58.198 [2024-07-10 13:30:37.428457] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d935c80 name raid_bdev1, state configuring 00:06:58.198 request: 00:06:58.198 { 00:06:58.198 "name": "raid_bdev1", 00:06:58.198 "raid_level": "raid1", 00:06:58.198 "base_bdevs": [ 00:06:58.198 "malloc1", 00:06:58.198 "malloc2" 00:06:58.198 ], 00:06:58.198 "superblock": false, 00:06:58.198 "method": "bdev_raid_create", 00:06:58.198 "req_id": 1 00:06:58.198 } 00:06:58.198 Got JSON-RPC error response 00:06:58.198 response: 00:06:58.198 { 00:06:58.198 "code": -17, 00:06:58.198 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:58.198 } 00:06:58.198 13:30:37 -- common/autotest_common.sh@643 -- # es=1 00:06:58.198 13:30:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:58.198 13:30:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:58.198 13:30:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:58.198 13:30:37 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:58.198 13:30:37 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:06:58.458 13:30:37 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:06:58.458 13:30:37 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:06:58.458 13:30:37 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:58.717 [2024-07-10 13:30:37.820117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:58.717 [2024-07-10 13:30:37.820171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:58.717 [2024-07-10 13:30:37.820196] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d935780 00:06:58.717 [2024-07-10 13:30:37.820202] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:58.717 [2024-07-10 13:30:37.820736] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:58.717 [2024-07-10 13:30:37.820764] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:58.717 [2024-07-10 13:30:37.820784] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:06:58.717 [2024-07-10 13:30:37.820794] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:58.717 pt1 00:06:58.717 13:30:37 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:06:58.717 13:30:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:06:58.717 13:30:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:58.717 13:30:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:06:58.717 13:30:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:06:58.717 13:30:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:58.717 13:30:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:58.717 13:30:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:58.717 13:30:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:58.717 13:30:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:58.718 13:30:37 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:58.718 13:30:37 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:58.718 13:30:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:58.718 "name": "raid_bdev1", 00:06:58.718 "uuid": "96cbacba-3ec0-11ef-b9c4-5b09e08d4792", 00:06:58.718 "strip_size_kb": 0, 00:06:58.718 "state": "configuring", 00:06:58.718 "raid_level": "raid1", 00:06:58.718 "superblock": true, 00:06:58.718 "num_base_bdevs": 2, 00:06:58.718 "num_base_bdevs_discovered": 1, 00:06:58.718 "num_base_bdevs_operational": 2, 00:06:58.718 "base_bdevs_list": [ 00:06:58.718 { 00:06:58.718 "name": "pt1", 00:06:58.718 "uuid": "35a01f86-5da5-f058-9d56-90b35e621403", 00:06:58.718 "is_configured": true, 00:06:58.718 "data_offset": 2048, 00:06:58.718 "data_size": 63488 00:06:58.718 }, 00:06:58.718 { 00:06:58.718 "name": null, 00:06:58.718 "uuid": "48fb1041-2c8f-4859-bee4-b857e56c02ef", 00:06:58.718 "is_configured": false, 00:06:58.718 "data_offset": 2048, 00:06:58.718 "data_size": 63488 00:06:58.718 } 00:06:58.718 ] 00:06:58.718 }' 00:06:58.718 13:30:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:58.718 13:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.286 13:30:38 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:06:59.286 13:30:38 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:06:59.286 13:30:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:06:59.286 13:30:38 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:59.286 [2024-07-10 13:30:38.512528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:59.286 [2024-07-10 13:30:38.512578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.286 [2024-07-10 13:30:38.512602] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d935f00 00:06:59.286 [2024-07-10 13:30:38.512608] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.286 [2024-07-10 13:30:38.512691] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.286 [2024-07-10 13:30:38.512698] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:59.286 [2024-07-10 13:30:38.512713] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:06:59.286 [2024-07-10 13:30:38.512719] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:59.286 [2024-07-10 13:30:38.512738] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d936180 00:06:59.286 [2024-07-10 13:30:38.512741] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:06:59.286 [2024-07-10 13:30:38.512756] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d998e20 00:06:59.287 [2024-07-10 13:30:38.512794] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d936180 00:06:59.287 [2024-07-10 13:30:38.512797] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d936180 00:06:59.287 [2024-07-10 13:30:38.512813] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.287 pt2 00:06:59.287 13:30:38 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:06:59.287 13:30:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:06:59.287 13:30:38 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
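Note: the verify_raid_bdev_state check that follows, like the ones earlier in this run, reduces to a single RPC query plus a jq filter. A minimal sketch reconstructed from the trace (the socket path and bdev name are the ones used in this run; the field comparisons are illustrative only, since the helper's internals are not shown in the trace):

    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # fetch all raid bdevs from the target and keep the one under test
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # compare the reported fields with the expected values passed to the helper
    [ "$(jq -r '.state' <<<"$info")" = online ]
    [ "$(jq -r '.raid_level' <<<"$info")" = raid1 ]
    [ "$(jq -r '.num_base_bdevs_operational' <<<"$info")" -eq 2 ]
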
00:06:59.287 13:30:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:06:59.287 13:30:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:59.287 13:30:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:06:59.287 13:30:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:06:59.287 13:30:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:59.287 13:30:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:59.287 13:30:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:59.287 13:30:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:59.287 13:30:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:59.287 13:30:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:59.287 13:30:38 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:59.546 13:30:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:59.546 "name": "raid_bdev1", 00:06:59.546 "uuid": "96cbacba-3ec0-11ef-b9c4-5b09e08d4792", 00:06:59.546 "strip_size_kb": 0, 00:06:59.546 "state": "online", 00:06:59.546 "raid_level": "raid1", 00:06:59.546 "superblock": true, 00:06:59.546 "num_base_bdevs": 2, 00:06:59.546 "num_base_bdevs_discovered": 2, 00:06:59.546 "num_base_bdevs_operational": 2, 00:06:59.546 "base_bdevs_list": [ 00:06:59.546 { 00:06:59.546 "name": "pt1", 00:06:59.546 "uuid": "35a01f86-5da5-f058-9d56-90b35e621403", 00:06:59.546 "is_configured": true, 00:06:59.546 "data_offset": 2048, 00:06:59.546 "data_size": 63488 00:06:59.546 }, 00:06:59.546 { 00:06:59.546 "name": "pt2", 00:06:59.546 "uuid": "48fb1041-2c8f-4859-bee4-b857e56c02ef", 00:06:59.546 "is_configured": true, 00:06:59.546 "data_offset": 2048, 00:06:59.546 "data_size": 63488 00:06:59.546 } 00:06:59.546 ] 00:06:59.546 }' 00:06:59.546 13:30:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:59.546 13:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.806 13:30:39 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:06:59.806 13:30:39 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:07:00.066 [2024-07-10 13:30:39.180945] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.066 13:30:39 -- bdev/bdev_raid.sh@430 -- # '[' 96cbacba-3ec0-11ef-b9c4-5b09e08d4792 '!=' 96cbacba-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:07:00.066 13:30:39 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:07:00.066 13:30:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:00.066 13:30:39 -- bdev/bdev_raid.sh@196 -- # return 0 00:07:00.066 13:30:39 -- bdev/bdev_raid.sh@436 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:00.066 [2024-07-10 13:30:39.377037] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:00.066 13:30:39 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:00.066 13:30:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:00.066 13:30:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:00.066 13:30:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:00.066 13:30:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:00.066 13:30:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:07:00.066 13:30:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:00.066 13:30:39 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:00.066 13:30:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:00.066 13:30:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:00.066 13:30:39 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:00.066 13:30:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:00.332 13:30:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:00.332 "name": "raid_bdev1", 00:07:00.332 "uuid": "96cbacba-3ec0-11ef-b9c4-5b09e08d4792", 00:07:00.332 "strip_size_kb": 0, 00:07:00.332 "state": "online", 00:07:00.332 "raid_level": "raid1", 00:07:00.332 "superblock": true, 00:07:00.332 "num_base_bdevs": 2, 00:07:00.332 "num_base_bdevs_discovered": 1, 00:07:00.332 "num_base_bdevs_operational": 1, 00:07:00.332 "base_bdevs_list": [ 00:07:00.332 { 00:07:00.332 "name": null, 00:07:00.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.332 "is_configured": false, 00:07:00.332 "data_offset": 2048, 00:07:00.332 "data_size": 63488 00:07:00.332 }, 00:07:00.332 { 00:07:00.332 "name": "pt2", 00:07:00.332 "uuid": "48fb1041-2c8f-4859-bee4-b857e56c02ef", 00:07:00.332 "is_configured": true, 00:07:00.332 "data_offset": 2048, 00:07:00.332 "data_size": 63488 00:07:00.332 } 00:07:00.332 ] 00:07:00.332 }' 00:07:00.332 13:30:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:00.332 13:30:39 -- common/autotest_common.sh@10 -- # set +x 00:07:00.598 13:30:39 -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:00.858 [2024-07-10 13:30:40.033422] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:00.858 [2024-07-10 13:30:40.033442] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:00.858 [2024-07-10 13:30:40.033455] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:00.858 [2024-07-10 13:30:40.033464] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:00.858 [2024-07-10 13:30:40.033467] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d936180 name raid_bdev1, state offline 00:07:00.858 13:30:40 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:07:00.858 13:30:40 -- bdev/bdev_raid.sh@443 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:01.117 13:30:40 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:07:01.117 13:30:40 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:07:01.117 13:30:40 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:07:01.117 13:30:40 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:07:01.117 13:30:40 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:01.117 13:30:40 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:07:01.117 13:30:40 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:07:01.117 13:30:40 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:07:01.117 13:30:40 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:07:01.117 13:30:40 -- bdev/bdev_raid.sh@462 -- # i=1 00:07:01.117 13:30:40 -- bdev/bdev_raid.sh@463 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:01.377 [2024-07-10 
13:30:40.637791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:01.377 [2024-07-10 13:30:40.637858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:01.377 [2024-07-10 13:30:40.637884] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d935f00 00:07:01.377 [2024-07-10 13:30:40.637891] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:01.377 [2024-07-10 13:30:40.638403] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:01.377 [2024-07-10 13:30:40.638432] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:01.377 [2024-07-10 13:30:40.638453] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:07:01.377 [2024-07-10 13:30:40.638463] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:01.377 [2024-07-10 13:30:40.638483] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d936180 00:07:01.377 [2024-07-10 13:30:40.638486] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:01.377 [2024-07-10 13:30:40.638504] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d998e20 00:07:01.377 [2024-07-10 13:30:40.638539] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d936180 00:07:01.377 [2024-07-10 13:30:40.638543] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d936180 00:07:01.377 [2024-07-10 13:30:40.638561] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.377 pt2 00:07:01.377 13:30:40 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:01.377 13:30:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:01.377 13:30:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:01.377 13:30:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:01.377 13:30:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:01.377 13:30:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:07:01.377 13:30:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:01.377 13:30:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:01.377 13:30:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:01.377 13:30:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:01.377 13:30:40 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:01.377 13:30:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:01.637 13:30:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:01.637 "name": "raid_bdev1", 00:07:01.637 "uuid": "96cbacba-3ec0-11ef-b9c4-5b09e08d4792", 00:07:01.637 "strip_size_kb": 0, 00:07:01.637 "state": "online", 00:07:01.637 "raid_level": "raid1", 00:07:01.637 "superblock": true, 00:07:01.637 "num_base_bdevs": 2, 00:07:01.637 "num_base_bdevs_discovered": 1, 00:07:01.637 "num_base_bdevs_operational": 1, 00:07:01.637 "base_bdevs_list": [ 00:07:01.637 { 00:07:01.637 "name": null, 00:07:01.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.637 "is_configured": false, 00:07:01.637 "data_offset": 2048, 00:07:01.637 "data_size": 63488 00:07:01.637 }, 00:07:01.637 { 00:07:01.637 "name": "pt2", 00:07:01.638 "uuid": "48fb1041-2c8f-4859-bee4-b857e56c02ef", 
00:07:01.638 "is_configured": true, 00:07:01.638 "data_offset": 2048, 00:07:01.638 "data_size": 63488 00:07:01.638 } 00:07:01.638 ] 00:07:01.638 }' 00:07:01.638 13:30:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:01.638 13:30:40 -- common/autotest_common.sh@10 -- # set +x 00:07:01.896 13:30:41 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:07:01.896 13:30:41 -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:01.896 13:30:41 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:07:02.154 [2024-07-10 13:30:41.354237] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:02.154 13:30:41 -- bdev/bdev_raid.sh@506 -- # '[' 96cbacba-3ec0-11ef-b9c4-5b09e08d4792 '!=' 96cbacba-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:07:02.154 13:30:41 -- bdev/bdev_raid.sh@511 -- # killprocess 49064 00:07:02.154 13:30:41 -- common/autotest_common.sh@926 -- # '[' -z 49064 ']' 00:07:02.154 13:30:41 -- common/autotest_common.sh@930 -- # kill -0 49064 00:07:02.154 13:30:41 -- common/autotest_common.sh@931 -- # uname 00:07:02.154 13:30:41 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:02.154 13:30:41 -- common/autotest_common.sh@934 -- # ps -c -o command 49064 00:07:02.154 13:30:41 -- common/autotest_common.sh@934 -- # tail -1 00:07:02.154 13:30:41 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:02.154 13:30:41 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:02.154 killing process with pid 49064 00:07:02.154 13:30:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49064' 00:07:02.154 13:30:41 -- common/autotest_common.sh@945 -- # kill 49064 00:07:02.154 [2024-07-10 13:30:41.387773] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:02.154 [2024-07-10 13:30:41.387791] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:02.154 [2024-07-10 13:30:41.387800] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:02.154 [2024-07-10 13:30:41.387804] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d936180 name raid_bdev1, state offline 00:07:02.154 13:30:41 -- common/autotest_common.sh@950 -- # wait 49064 00:07:02.154 [2024-07-10 13:30:41.397332] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@513 -- # return 0 00:07:02.415 00:07:02.415 real 0m7.853s 00:07:02.415 user 0m13.447s 00:07:02.415 sys 0m1.567s 00:07:02.415 13:30:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.415 13:30:41 -- common/autotest_common.sh@10 -- # set +x 00:07:02.415 ************************************ 00:07:02.415 END TEST raid_superblock_test 00:07:02.415 ************************************ 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:02.415 13:30:41 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:02.415 13:30:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.415 13:30:41 -- common/autotest_common.sh@10 -- # set +x 00:07:02.415 ************************************ 00:07:02.415 START TEST raid_state_function_test 00:07:02.415 ************************************ 00:07:02.415 13:30:41 -- 
common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@226 -- # raid_pid=49279 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 49279' 00:07:02.415 Process raid pid: 49279 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:02.415 13:30:41 -- bdev/bdev_raid.sh@228 -- # waitforlisten 49279 /var/tmp/spdk-raid.sock 00:07:02.416 13:30:41 -- common/autotest_common.sh@819 -- # '[' -z 49279 ']' 00:07:02.416 13:30:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:02.416 13:30:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:02.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:02.416 13:30:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:02.416 13:30:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:02.416 13:30:41 -- common/autotest_common.sh@10 -- # set +x 00:07:02.416 [2024-07-10 13:30:41.616256] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
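Aside (illustrative sketch, not part of the captured trace): the start-up pattern traced above boils down to launching bdev_svc against the raid test socket and then waiting until its RPC server answers. The bdev_svc path, socket name and -i/-L flags below are copied from the trace; the polling loop is an assumed simplification of the harness's waitforlisten helper, not its literal implementation.

    sock=/var/tmp/spdk-raid.sock
    # Launch the RPC target with the same flags seen in the trace and remember its pid.
    /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!
    # Assumed simplification: poll the RPC server until it responds instead of the
    # full waitforlisten bookkeeping done by autotest_common.sh.
    while ! /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done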
00:07:02.416 [2024-07-10 13:30:41.616606] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:02.992 EAL: TSC is not safe to use in SMP mode 00:07:02.992 EAL: TSC is not invariant 00:07:02.992 [2024-07-10 13:30:42.049611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.992 [2024-07-10 13:30:42.137331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.992 [2024-07-10 13:30:42.137774] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.992 [2024-07-10 13:30:42.137783] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.252 13:30:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:03.252 13:30:42 -- common/autotest_common.sh@852 -- # return 0 00:07:03.252 13:30:42 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:03.511 [2024-07-10 13:30:42.705140] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:03.511 [2024-07-10 13:30:42.705205] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:03.511 [2024-07-10 13:30:42.705210] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:03.511 [2024-07-10 13:30:42.705217] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:03.511 [2024-07-10 13:30:42.705220] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:03.511 [2024-07-10 13:30:42.705227] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:03.511 13:30:42 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:03.511 13:30:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:03.511 13:30:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:03.511 13:30:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:03.511 13:30:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:03.511 13:30:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:03.511 13:30:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:03.511 13:30:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:03.511 13:30:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:03.511 13:30:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:03.511 13:30:42 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:03.511 13:30:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.770 13:30:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:03.770 "name": "Existed_Raid", 00:07:03.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.770 "strip_size_kb": 64, 00:07:03.770 "state": "configuring", 00:07:03.770 "raid_level": "raid0", 00:07:03.770 "superblock": false, 00:07:03.770 "num_base_bdevs": 3, 00:07:03.770 "num_base_bdevs_discovered": 0, 00:07:03.770 "num_base_bdevs_operational": 3, 00:07:03.770 "base_bdevs_list": [ 00:07:03.770 { 00:07:03.770 "name": "BaseBdev1", 00:07:03.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.770 "is_configured": false, 00:07:03.770 "data_offset": 0, 00:07:03.770 
"data_size": 0 00:07:03.770 }, 00:07:03.770 { 00:07:03.770 "name": "BaseBdev2", 00:07:03.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.770 "is_configured": false, 00:07:03.770 "data_offset": 0, 00:07:03.770 "data_size": 0 00:07:03.770 }, 00:07:03.770 { 00:07:03.770 "name": "BaseBdev3", 00:07:03.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.770 "is_configured": false, 00:07:03.770 "data_offset": 0, 00:07:03.770 "data_size": 0 00:07:03.770 } 00:07:03.770 ] 00:07:03.770 }' 00:07:03.770 13:30:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:03.770 13:30:42 -- common/autotest_common.sh@10 -- # set +x 00:07:04.028 13:30:43 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:04.028 [2024-07-10 13:30:43.345511] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:04.028 [2024-07-10 13:30:43.345538] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bebb500 name Existed_Raid, state configuring 00:07:04.028 13:30:43 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:04.286 [2024-07-10 13:30:43.537650] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:04.286 [2024-07-10 13:30:43.537703] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:04.286 [2024-07-10 13:30:43.537707] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:04.286 [2024-07-10 13:30:43.537714] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:04.286 [2024-07-10 13:30:43.537716] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:04.286 [2024-07-10 13:30:43.537722] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:04.286 13:30:43 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:04.545 [2024-07-10 13:30:43.706515] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:04.545 BaseBdev1 00:07:04.545 13:30:43 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:04.545 13:30:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:04.545 13:30:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:04.545 13:30:43 -- common/autotest_common.sh@889 -- # local i 00:07:04.545 13:30:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:04.545 13:30:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:04.545 13:30:43 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:04.803 13:30:43 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:04.803 [ 00:07:04.803 { 00:07:04.803 "name": "BaseBdev1", 00:07:04.803 "aliases": [ 00:07:04.803 "9b9b33a9-3ec0-11ef-b9c4-5b09e08d4792" 00:07:04.803 ], 00:07:04.803 "product_name": "Malloc disk", 00:07:04.803 "block_size": 512, 00:07:04.803 "num_blocks": 65536, 00:07:04.803 "uuid": "9b9b33a9-3ec0-11ef-b9c4-5b09e08d4792", 00:07:04.803 "assigned_rate_limits": { 00:07:04.803 
"rw_ios_per_sec": 0, 00:07:04.803 "rw_mbytes_per_sec": 0, 00:07:04.803 "r_mbytes_per_sec": 0, 00:07:04.803 "w_mbytes_per_sec": 0 00:07:04.803 }, 00:07:04.803 "claimed": true, 00:07:04.803 "claim_type": "exclusive_write", 00:07:04.803 "zoned": false, 00:07:04.803 "supported_io_types": { 00:07:04.803 "read": true, 00:07:04.803 "write": true, 00:07:04.803 "unmap": true, 00:07:04.803 "write_zeroes": true, 00:07:04.803 "flush": true, 00:07:04.803 "reset": true, 00:07:04.804 "compare": false, 00:07:04.804 "compare_and_write": false, 00:07:04.804 "abort": true, 00:07:04.804 "nvme_admin": false, 00:07:04.804 "nvme_io": false 00:07:04.804 }, 00:07:04.804 "memory_domains": [ 00:07:04.804 { 00:07:04.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.804 "dma_device_type": 2 00:07:04.804 } 00:07:04.804 ], 00:07:04.804 "driver_specific": {} 00:07:04.804 } 00:07:04.804 ] 00:07:04.804 13:30:44 -- common/autotest_common.sh@895 -- # return 0 00:07:04.804 13:30:44 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:04.804 13:30:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:04.804 13:30:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:04.804 13:30:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:04.804 13:30:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:04.804 13:30:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:04.804 13:30:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:04.804 13:30:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:04.804 13:30:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:04.804 13:30:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:04.804 13:30:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.804 13:30:44 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:05.062 13:30:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:05.062 "name": "Existed_Raid", 00:07:05.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.062 "strip_size_kb": 64, 00:07:05.062 "state": "configuring", 00:07:05.062 "raid_level": "raid0", 00:07:05.062 "superblock": false, 00:07:05.062 "num_base_bdevs": 3, 00:07:05.062 "num_base_bdevs_discovered": 1, 00:07:05.062 "num_base_bdevs_operational": 3, 00:07:05.062 "base_bdevs_list": [ 00:07:05.062 { 00:07:05.062 "name": "BaseBdev1", 00:07:05.062 "uuid": "9b9b33a9-3ec0-11ef-b9c4-5b09e08d4792", 00:07:05.062 "is_configured": true, 00:07:05.062 "data_offset": 0, 00:07:05.062 "data_size": 65536 00:07:05.062 }, 00:07:05.062 { 00:07:05.062 "name": "BaseBdev2", 00:07:05.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.062 "is_configured": false, 00:07:05.062 "data_offset": 0, 00:07:05.062 "data_size": 0 00:07:05.062 }, 00:07:05.062 { 00:07:05.062 "name": "BaseBdev3", 00:07:05.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.062 "is_configured": false, 00:07:05.062 "data_offset": 0, 00:07:05.062 "data_size": 0 00:07:05.062 } 00:07:05.062 ] 00:07:05.062 }' 00:07:05.062 13:30:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:05.062 13:30:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.319 13:30:44 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:05.576 [2024-07-10 13:30:44.750355] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:05.576 [2024-07-10 13:30:44.750384] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bebb500 name Existed_Raid, state configuring 00:07:05.576 13:30:44 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:07:05.576 13:30:44 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:05.835 [2024-07-10 13:30:44.946473] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:05.835 [2024-07-10 13:30:44.947080] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:05.835 [2024-07-10 13:30:44.947122] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:05.835 [2024-07-10 13:30:44.947126] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:05.835 [2024-07-10 13:30:44.947132] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:05.835 13:30:44 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:05.835 13:30:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:05.835 13:30:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:05.835 13:30:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:05.835 13:30:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:05.835 13:30:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:05.835 13:30:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:05.835 13:30:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:05.835 13:30:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:05.835 13:30:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:05.835 13:30:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:05.835 13:30:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:05.835 13:30:44 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:05.835 13:30:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.835 13:30:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:05.835 "name": "Existed_Raid", 00:07:05.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.835 "strip_size_kb": 64, 00:07:05.835 "state": "configuring", 00:07:05.835 "raid_level": "raid0", 00:07:05.835 "superblock": false, 00:07:05.835 "num_base_bdevs": 3, 00:07:05.835 "num_base_bdevs_discovered": 1, 00:07:05.835 "num_base_bdevs_operational": 3, 00:07:05.835 "base_bdevs_list": [ 00:07:05.835 { 00:07:05.835 "name": "BaseBdev1", 00:07:05.835 "uuid": "9b9b33a9-3ec0-11ef-b9c4-5b09e08d4792", 00:07:05.835 "is_configured": true, 00:07:05.835 "data_offset": 0, 00:07:05.835 "data_size": 65536 00:07:05.835 }, 00:07:05.835 { 00:07:05.835 "name": "BaseBdev2", 00:07:05.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.835 "is_configured": false, 00:07:05.835 "data_offset": 0, 00:07:05.835 "data_size": 0 00:07:05.835 }, 00:07:05.835 { 00:07:05.835 "name": "BaseBdev3", 00:07:05.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.835 "is_configured": false, 00:07:05.835 "data_offset": 0, 00:07:05.835 "data_size": 0 00:07:05.835 } 00:07:05.835 ] 00:07:05.835 }' 00:07:05.835 13:30:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:05.835 
13:30:45 -- common/autotest_common.sh@10 -- # set +x 00:07:06.093 13:30:45 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:06.352 [2024-07-10 13:30:45.574967] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:06.352 BaseBdev2 00:07:06.352 13:30:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:06.352 13:30:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:07:06.352 13:30:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:06.352 13:30:45 -- common/autotest_common.sh@889 -- # local i 00:07:06.352 13:30:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:06.352 13:30:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:06.352 13:30:45 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:06.611 13:30:45 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:06.870 [ 00:07:06.870 { 00:07:06.870 "name": "BaseBdev2", 00:07:06.870 "aliases": [ 00:07:06.870 "9cb867fb-3ec0-11ef-b9c4-5b09e08d4792" 00:07:06.870 ], 00:07:06.870 "product_name": "Malloc disk", 00:07:06.870 "block_size": 512, 00:07:06.870 "num_blocks": 65536, 00:07:06.870 "uuid": "9cb867fb-3ec0-11ef-b9c4-5b09e08d4792", 00:07:06.870 "assigned_rate_limits": { 00:07:06.870 "rw_ios_per_sec": 0, 00:07:06.870 "rw_mbytes_per_sec": 0, 00:07:06.870 "r_mbytes_per_sec": 0, 00:07:06.870 "w_mbytes_per_sec": 0 00:07:06.870 }, 00:07:06.870 "claimed": true, 00:07:06.870 "claim_type": "exclusive_write", 00:07:06.870 "zoned": false, 00:07:06.870 "supported_io_types": { 00:07:06.870 "read": true, 00:07:06.870 "write": true, 00:07:06.870 "unmap": true, 00:07:06.870 "write_zeroes": true, 00:07:06.870 "flush": true, 00:07:06.870 "reset": true, 00:07:06.870 "compare": false, 00:07:06.870 "compare_and_write": false, 00:07:06.870 "abort": true, 00:07:06.870 "nvme_admin": false, 00:07:06.870 "nvme_io": false 00:07:06.870 }, 00:07:06.870 "memory_domains": [ 00:07:06.870 { 00:07:06.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.870 "dma_device_type": 2 00:07:06.870 } 00:07:06.870 ], 00:07:06.870 "driver_specific": {} 00:07:06.870 } 00:07:06.870 ] 00:07:06.870 13:30:45 -- common/autotest_common.sh@895 -- # return 0 00:07:06.870 13:30:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:06.870 13:30:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:06.870 13:30:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:06.870 13:30:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:06.870 13:30:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:06.870 13:30:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:06.870 13:30:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:06.870 13:30:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:06.870 13:30:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:06.870 13:30:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:06.870 13:30:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:06.870 13:30:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:06.870 13:30:45 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:07:06.870 13:30:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.870 13:30:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:06.870 "name": "Existed_Raid", 00:07:06.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.870 "strip_size_kb": 64, 00:07:06.870 "state": "configuring", 00:07:06.870 "raid_level": "raid0", 00:07:06.870 "superblock": false, 00:07:06.870 "num_base_bdevs": 3, 00:07:06.870 "num_base_bdevs_discovered": 2, 00:07:06.870 "num_base_bdevs_operational": 3, 00:07:06.870 "base_bdevs_list": [ 00:07:06.870 { 00:07:06.870 "name": "BaseBdev1", 00:07:06.870 "uuid": "9b9b33a9-3ec0-11ef-b9c4-5b09e08d4792", 00:07:06.870 "is_configured": true, 00:07:06.870 "data_offset": 0, 00:07:06.870 "data_size": 65536 00:07:06.870 }, 00:07:06.870 { 00:07:06.870 "name": "BaseBdev2", 00:07:06.870 "uuid": "9cb867fb-3ec0-11ef-b9c4-5b09e08d4792", 00:07:06.870 "is_configured": true, 00:07:06.870 "data_offset": 0, 00:07:06.870 "data_size": 65536 00:07:06.870 }, 00:07:06.870 { 00:07:06.870 "name": "BaseBdev3", 00:07:06.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.870 "is_configured": false, 00:07:06.870 "data_offset": 0, 00:07:06.870 "data_size": 0 00:07:06.870 } 00:07:06.870 ] 00:07:06.870 }' 00:07:06.870 13:30:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:06.870 13:30:46 -- common/autotest_common.sh@10 -- # set +x 00:07:07.129 13:30:46 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:07:07.389 [2024-07-10 13:30:46.611573] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:07.389 [2024-07-10 13:30:46.611600] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bebba00 00:07:07.389 [2024-07-10 13:30:46.611604] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:07.389 [2024-07-10 13:30:46.611621] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bf1eec0 00:07:07.389 [2024-07-10 13:30:46.611729] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bebba00 00:07:07.389 [2024-07-10 13:30:46.611732] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82bebba00 00:07:07.389 [2024-07-10 13:30:46.611757] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.389 BaseBdev3 00:07:07.389 13:30:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:07:07.389 13:30:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:07:07.389 13:30:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:07.389 13:30:46 -- common/autotest_common.sh@889 -- # local i 00:07:07.389 13:30:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:07.389 13:30:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:07.389 13:30:46 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:07.650 13:30:46 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:07.650 [ 00:07:07.650 { 00:07:07.650 "name": "BaseBdev3", 00:07:07.650 "aliases": [ 00:07:07.650 "9d5694eb-3ec0-11ef-b9c4-5b09e08d4792" 00:07:07.650 ], 00:07:07.650 "product_name": "Malloc disk", 00:07:07.650 "block_size": 512, 00:07:07.650 "num_blocks": 
65536, 00:07:07.650 "uuid": "9d5694eb-3ec0-11ef-b9c4-5b09e08d4792", 00:07:07.650 "assigned_rate_limits": { 00:07:07.650 "rw_ios_per_sec": 0, 00:07:07.650 "rw_mbytes_per_sec": 0, 00:07:07.650 "r_mbytes_per_sec": 0, 00:07:07.650 "w_mbytes_per_sec": 0 00:07:07.650 }, 00:07:07.650 "claimed": true, 00:07:07.650 "claim_type": "exclusive_write", 00:07:07.650 "zoned": false, 00:07:07.650 "supported_io_types": { 00:07:07.650 "read": true, 00:07:07.650 "write": true, 00:07:07.650 "unmap": true, 00:07:07.650 "write_zeroes": true, 00:07:07.650 "flush": true, 00:07:07.650 "reset": true, 00:07:07.650 "compare": false, 00:07:07.650 "compare_and_write": false, 00:07:07.650 "abort": true, 00:07:07.650 "nvme_admin": false, 00:07:07.650 "nvme_io": false 00:07:07.650 }, 00:07:07.650 "memory_domains": [ 00:07:07.650 { 00:07:07.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.650 "dma_device_type": 2 00:07:07.650 } 00:07:07.650 ], 00:07:07.650 "driver_specific": {} 00:07:07.650 } 00:07:07.650 ] 00:07:07.650 13:30:46 -- common/autotest_common.sh@895 -- # return 0 00:07:07.650 13:30:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:07.650 13:30:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:07.650 13:30:46 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:07.650 13:30:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:07.650 13:30:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:07.650 13:30:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:07.650 13:30:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:07.650 13:30:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:07.650 13:30:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:07.650 13:30:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:07.650 13:30:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:07.650 13:30:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:07.650 13:30:46 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:07.650 13:30:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.909 13:30:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:07.909 "name": "Existed_Raid", 00:07:07.909 "uuid": "9d5699f4-3ec0-11ef-b9c4-5b09e08d4792", 00:07:07.909 "strip_size_kb": 64, 00:07:07.909 "state": "online", 00:07:07.909 "raid_level": "raid0", 00:07:07.909 "superblock": false, 00:07:07.909 "num_base_bdevs": 3, 00:07:07.909 "num_base_bdevs_discovered": 3, 00:07:07.909 "num_base_bdevs_operational": 3, 00:07:07.909 "base_bdevs_list": [ 00:07:07.909 { 00:07:07.909 "name": "BaseBdev1", 00:07:07.909 "uuid": "9b9b33a9-3ec0-11ef-b9c4-5b09e08d4792", 00:07:07.909 "is_configured": true, 00:07:07.909 "data_offset": 0, 00:07:07.909 "data_size": 65536 00:07:07.909 }, 00:07:07.909 { 00:07:07.909 "name": "BaseBdev2", 00:07:07.909 "uuid": "9cb867fb-3ec0-11ef-b9c4-5b09e08d4792", 00:07:07.909 "is_configured": true, 00:07:07.909 "data_offset": 0, 00:07:07.909 "data_size": 65536 00:07:07.909 }, 00:07:07.909 { 00:07:07.909 "name": "BaseBdev3", 00:07:07.909 "uuid": "9d5694eb-3ec0-11ef-b9c4-5b09e08d4792", 00:07:07.909 "is_configured": true, 00:07:07.909 "data_offset": 0, 00:07:07.909 "data_size": 65536 00:07:07.909 } 00:07:07.909 ] 00:07:07.909 }' 00:07:07.909 13:30:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:07.909 13:30:47 -- common/autotest_common.sh@10 -- # set +x 
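Aside (illustrative sketch, not part of the captured trace): the verify_raid_bdev_state checks recorded above amount to fetching the raid bdev's JSON over the same socket and comparing a few fields against the expected values. The socket path, RPC name and jq filter are taken from the trace; the individual field comparisons are an assumed condensation of the script's checks, not its exact code.

    # Pull the JSON for the raid bdev under test, as the trace does.
    tmp=$(/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
          jq -r '.[] | select(.name == "Existed_Raid")')
    # Compare the fields the test asserts on: state, level, strip size, member counts.
    [[ $(jq -r .state <<< "$tmp") == online ]]
    [[ $(jq -r .raid_level <<< "$tmp") == raid0 ]]
    [[ $(jq -r .strip_size_kb <<< "$tmp") -eq 64 ]]
    [[ $(jq -r .num_base_bdevs_discovered <<< "$tmp") -eq 3 ]]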
00:07:08.168 13:30:47 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:08.427 [2024-07-10 13:30:47.620059] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:08.428 [2024-07-10 13:30:47.620087] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:08.428 [2024-07-10 13:30:47.620099] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.428 13:30:47 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:07:08.428 13:30:47 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:07:08.428 13:30:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:08.428 13:30:47 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:08.428 13:30:47 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:07:08.428 13:30:47 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:08.428 13:30:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:08.428 13:30:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:07:08.428 13:30:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:08.428 13:30:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:08.428 13:30:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:08.428 13:30:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:08.428 13:30:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:08.428 13:30:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:08.428 13:30:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:08.428 13:30:47 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:08.428 13:30:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.686 13:30:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:08.686 "name": "Existed_Raid", 00:07:08.686 "uuid": "9d5699f4-3ec0-11ef-b9c4-5b09e08d4792", 00:07:08.686 "strip_size_kb": 64, 00:07:08.686 "state": "offline", 00:07:08.686 "raid_level": "raid0", 00:07:08.686 "superblock": false, 00:07:08.686 "num_base_bdevs": 3, 00:07:08.686 "num_base_bdevs_discovered": 2, 00:07:08.686 "num_base_bdevs_operational": 2, 00:07:08.686 "base_bdevs_list": [ 00:07:08.686 { 00:07:08.686 "name": null, 00:07:08.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.686 "is_configured": false, 00:07:08.686 "data_offset": 0, 00:07:08.686 "data_size": 65536 00:07:08.686 }, 00:07:08.686 { 00:07:08.686 "name": "BaseBdev2", 00:07:08.687 "uuid": "9cb867fb-3ec0-11ef-b9c4-5b09e08d4792", 00:07:08.687 "is_configured": true, 00:07:08.687 "data_offset": 0, 00:07:08.687 "data_size": 65536 00:07:08.687 }, 00:07:08.687 { 00:07:08.687 "name": "BaseBdev3", 00:07:08.687 "uuid": "9d5694eb-3ec0-11ef-b9c4-5b09e08d4792", 00:07:08.687 "is_configured": true, 00:07:08.687 "data_offset": 0, 00:07:08.687 "data_size": 65536 00:07:08.687 } 00:07:08.687 ] 00:07:08.687 }' 00:07:08.687 13:30:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:08.687 13:30:47 -- common/autotest_common.sh@10 -- # set +x 00:07:08.946 13:30:48 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:07:08.946 13:30:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:08.946 13:30:48 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:08.946 13:30:48 -- bdev/bdev_raid.sh@274 -- # jq -r 
'.[0]["name"]' 00:07:08.946 13:30:48 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:08.946 13:30:48 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:08.946 13:30:48 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:09.205 [2024-07-10 13:30:48.457304] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:09.205 13:30:48 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:09.205 13:30:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:09.205 13:30:48 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:09.205 13:30:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:09.466 13:30:48 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:09.466 13:30:48 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:09.466 13:30:48 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:07:09.725 [2024-07-10 13:30:48.838206] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:09.725 [2024-07-10 13:30:48.838236] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bebba00 name Existed_Raid, state offline 00:07:09.725 13:30:48 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:09.725 13:30:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:09.725 13:30:48 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:07:09.725 13:30:48 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:09.725 13:30:49 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:07:09.725 13:30:49 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:07:09.725 13:30:49 -- bdev/bdev_raid.sh@287 -- # killprocess 49279 00:07:09.725 13:30:49 -- common/autotest_common.sh@926 -- # '[' -z 49279 ']' 00:07:09.725 13:30:49 -- common/autotest_common.sh@930 -- # kill -0 49279 00:07:09.725 13:30:49 -- common/autotest_common.sh@931 -- # uname 00:07:09.725 13:30:49 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:09.725 13:30:49 -- common/autotest_common.sh@934 -- # ps -c -o command 49279 00:07:09.725 13:30:49 -- common/autotest_common.sh@934 -- # tail -1 00:07:09.725 13:30:49 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:09.725 13:30:49 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:09.725 killing process with pid 49279 00:07:09.725 13:30:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49279' 00:07:09.725 13:30:49 -- common/autotest_common.sh@945 -- # kill 49279 00:07:09.725 [2024-07-10 13:30:49.060728] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:09.725 [2024-07-10 13:30:49.060774] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:09.725 13:30:49 -- common/autotest_common.sh@950 -- # wait 49279 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@289 -- # return 0 00:07:09.985 00:07:09.985 real 0m7.617s 00:07:09.985 user 0m12.965s 00:07:09.985 sys 0m1.585s 00:07:09.985 13:30:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.985 13:30:49 -- common/autotest_common.sh@10 -- # set +x 00:07:09.985 ************************************ 00:07:09.985 END TEST raid_state_function_test 00:07:09.985 ************************************ 00:07:09.985 13:30:49 -- 
bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:07:09.985 13:30:49 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:09.985 13:30:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.985 13:30:49 -- common/autotest_common.sh@10 -- # set +x 00:07:09.985 ************************************ 00:07:09.985 START TEST raid_state_function_test_sb 00:07:09.985 ************************************ 00:07:09.985 13:30:49 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@226 -- # raid_pid=49512 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 49512' 00:07:09.985 Process raid pid: 49512 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:09.985 13:30:49 -- bdev/bdev_raid.sh@228 -- # waitforlisten 49512 /var/tmp/spdk-raid.sock 00:07:09.985 13:30:49 -- common/autotest_common.sh@819 -- # '[' -z 49512 ']' 00:07:09.985 13:30:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:09.985 13:30:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:09.985 13:30:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:09.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
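Aside (illustrative sketch, not part of the captured trace): functionally, the only difference between this superblock run and the raid_state_function_test run above is that superblock=true becomes a -s flag on bdev_raid_create. The command below mirrors the invocation that appears a little further down in the trace; assembling the flag through an unquoted variable is an assumption about how the preamble's superblock_create_arg is used.

    superblock_create_arg=-s   # empty string for the non-superblock variant
    # Intentionally unquoted so an empty value expands to nothing.
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 $superblock_create_arg -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid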
00:07:09.985 13:30:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:09.985 13:30:49 -- common/autotest_common.sh@10 -- # set +x 00:07:09.985 [2024-07-10 13:30:49.284457] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:09.985 [2024-07-10 13:30:49.284801] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:10.553 EAL: TSC is not safe to use in SMP mode 00:07:10.553 EAL: TSC is not invariant 00:07:10.553 [2024-07-10 13:30:49.718415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.553 [2024-07-10 13:30:49.806799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.553 [2024-07-10 13:30:49.807210] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.553 [2024-07-10 13:30:49.807219] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.121 13:30:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:11.121 13:30:50 -- common/autotest_common.sh@852 -- # return 0 00:07:11.121 13:30:50 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:11.121 [2024-07-10 13:30:50.326535] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:11.121 [2024-07-10 13:30:50.326604] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:11.121 [2024-07-10 13:30:50.326608] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.121 [2024-07-10 13:30:50.326614] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:11.121 [2024-07-10 13:30:50.326617] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:11.121 [2024-07-10 13:30:50.326623] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:11.121 13:30:50 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:11.121 13:30:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:11.121 13:30:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:11.121 13:30:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:11.121 13:30:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:11.121 13:30:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:11.121 13:30:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:11.121 13:30:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:11.121 13:30:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:11.121 13:30:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:11.121 13:30:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.121 13:30:50 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:11.380 13:30:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:11.380 "name": "Existed_Raid", 00:07:11.380 "uuid": "9f8d741a-3ec0-11ef-b9c4-5b09e08d4792", 00:07:11.380 "strip_size_kb": 64, 00:07:11.380 "state": "configuring", 00:07:11.380 "raid_level": "raid0", 00:07:11.380 "superblock": true, 00:07:11.380 "num_base_bdevs": 3, 00:07:11.380 "num_base_bdevs_discovered": 0, 
00:07:11.380 "num_base_bdevs_operational": 3, 00:07:11.380 "base_bdevs_list": [ 00:07:11.380 { 00:07:11.380 "name": "BaseBdev1", 00:07:11.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.380 "is_configured": false, 00:07:11.380 "data_offset": 0, 00:07:11.380 "data_size": 0 00:07:11.380 }, 00:07:11.380 { 00:07:11.380 "name": "BaseBdev2", 00:07:11.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.380 "is_configured": false, 00:07:11.380 "data_offset": 0, 00:07:11.380 "data_size": 0 00:07:11.380 }, 00:07:11.380 { 00:07:11.380 "name": "BaseBdev3", 00:07:11.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.380 "is_configured": false, 00:07:11.380 "data_offset": 0, 00:07:11.380 "data_size": 0 00:07:11.380 } 00:07:11.380 ] 00:07:11.380 }' 00:07:11.380 13:30:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:11.380 13:30:50 -- common/autotest_common.sh@10 -- # set +x 00:07:11.639 13:30:50 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:11.639 [2024-07-10 13:30:50.970852] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:11.639 [2024-07-10 13:30:50.970875] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c8d1500 name Existed_Raid, state configuring 00:07:11.639 13:30:50 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:11.898 [2024-07-10 13:30:51.154953] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:11.898 [2024-07-10 13:30:51.154987] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:11.898 [2024-07-10 13:30:51.154990] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.898 [2024-07-10 13:30:51.154995] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:11.898 [2024-07-10 13:30:51.154998] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:11.898 [2024-07-10 13:30:51.155003] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:11.898 13:30:51 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:12.156 [2024-07-10 13:30:51.323790] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:12.156 BaseBdev1 00:07:12.156 13:30:51 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:12.156 13:30:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:12.156 13:30:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:12.156 13:30:51 -- common/autotest_common.sh@889 -- # local i 00:07:12.156 13:30:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:12.156 13:30:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:12.156 13:30:51 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:12.156 13:30:51 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:12.417 [ 00:07:12.417 { 00:07:12.417 "name": "BaseBdev1", 00:07:12.417 "aliases": [ 00:07:12.418 
"a025823b-3ec0-11ef-b9c4-5b09e08d4792" 00:07:12.418 ], 00:07:12.418 "product_name": "Malloc disk", 00:07:12.418 "block_size": 512, 00:07:12.418 "num_blocks": 65536, 00:07:12.418 "uuid": "a025823b-3ec0-11ef-b9c4-5b09e08d4792", 00:07:12.418 "assigned_rate_limits": { 00:07:12.418 "rw_ios_per_sec": 0, 00:07:12.418 "rw_mbytes_per_sec": 0, 00:07:12.418 "r_mbytes_per_sec": 0, 00:07:12.418 "w_mbytes_per_sec": 0 00:07:12.418 }, 00:07:12.418 "claimed": true, 00:07:12.418 "claim_type": "exclusive_write", 00:07:12.418 "zoned": false, 00:07:12.418 "supported_io_types": { 00:07:12.418 "read": true, 00:07:12.418 "write": true, 00:07:12.418 "unmap": true, 00:07:12.418 "write_zeroes": true, 00:07:12.418 "flush": true, 00:07:12.418 "reset": true, 00:07:12.418 "compare": false, 00:07:12.418 "compare_and_write": false, 00:07:12.418 "abort": true, 00:07:12.418 "nvme_admin": false, 00:07:12.418 "nvme_io": false 00:07:12.418 }, 00:07:12.418 "memory_domains": [ 00:07:12.418 { 00:07:12.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.418 "dma_device_type": 2 00:07:12.418 } 00:07:12.418 ], 00:07:12.418 "driver_specific": {} 00:07:12.418 } 00:07:12.418 ] 00:07:12.418 13:30:51 -- common/autotest_common.sh@895 -- # return 0 00:07:12.418 13:30:51 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:12.418 13:30:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:12.418 13:30:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:12.418 13:30:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:12.418 13:30:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:12.418 13:30:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:12.418 13:30:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:12.418 13:30:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:12.418 13:30:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:12.418 13:30:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:12.418 13:30:51 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:12.418 13:30:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.678 13:30:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:12.678 "name": "Existed_Raid", 00:07:12.678 "uuid": "a00bdc4f-3ec0-11ef-b9c4-5b09e08d4792", 00:07:12.678 "strip_size_kb": 64, 00:07:12.678 "state": "configuring", 00:07:12.678 "raid_level": "raid0", 00:07:12.678 "superblock": true, 00:07:12.678 "num_base_bdevs": 3, 00:07:12.678 "num_base_bdevs_discovered": 1, 00:07:12.678 "num_base_bdevs_operational": 3, 00:07:12.678 "base_bdevs_list": [ 00:07:12.678 { 00:07:12.678 "name": "BaseBdev1", 00:07:12.678 "uuid": "a025823b-3ec0-11ef-b9c4-5b09e08d4792", 00:07:12.679 "is_configured": true, 00:07:12.679 "data_offset": 2048, 00:07:12.679 "data_size": 63488 00:07:12.679 }, 00:07:12.679 { 00:07:12.679 "name": "BaseBdev2", 00:07:12.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.679 "is_configured": false, 00:07:12.679 "data_offset": 0, 00:07:12.679 "data_size": 0 00:07:12.679 }, 00:07:12.679 { 00:07:12.679 "name": "BaseBdev3", 00:07:12.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.679 "is_configured": false, 00:07:12.679 "data_offset": 0, 00:07:12.679 "data_size": 0 00:07:12.679 } 00:07:12.679 ] 00:07:12.679 }' 00:07:12.679 13:30:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:12.679 13:30:51 -- 
common/autotest_common.sh@10 -- # set +x 00:07:12.937 13:30:52 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:13.196 [2024-07-10 13:30:52.343576] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:13.196 [2024-07-10 13:30:52.343602] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c8d1500 name Existed_Raid, state configuring 00:07:13.196 13:30:52 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:07:13.196 13:30:52 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:13.196 13:30:52 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:13.454 BaseBdev1 00:07:13.454 13:30:52 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:07:13.454 13:30:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:13.454 13:30:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:13.454 13:30:52 -- common/autotest_common.sh@889 -- # local i 00:07:13.454 13:30:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:13.454 13:30:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:13.454 13:30:52 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:13.712 13:30:52 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:13.970 [ 00:07:13.970 { 00:07:13.970 "name": "BaseBdev1", 00:07:13.970 "aliases": [ 00:07:13.970 "a0fb554a-3ec0-11ef-b9c4-5b09e08d4792" 00:07:13.970 ], 00:07:13.970 "product_name": "Malloc disk", 00:07:13.970 "block_size": 512, 00:07:13.970 "num_blocks": 65536, 00:07:13.970 "uuid": "a0fb554a-3ec0-11ef-b9c4-5b09e08d4792", 00:07:13.970 "assigned_rate_limits": { 00:07:13.970 "rw_ios_per_sec": 0, 00:07:13.970 "rw_mbytes_per_sec": 0, 00:07:13.970 "r_mbytes_per_sec": 0, 00:07:13.970 "w_mbytes_per_sec": 0 00:07:13.970 }, 00:07:13.970 "claimed": false, 00:07:13.970 "zoned": false, 00:07:13.970 "supported_io_types": { 00:07:13.970 "read": true, 00:07:13.970 "write": true, 00:07:13.970 "unmap": true, 00:07:13.970 "write_zeroes": true, 00:07:13.970 "flush": true, 00:07:13.970 "reset": true, 00:07:13.970 "compare": false, 00:07:13.970 "compare_and_write": false, 00:07:13.970 "abort": true, 00:07:13.970 "nvme_admin": false, 00:07:13.970 "nvme_io": false 00:07:13.970 }, 00:07:13.970 "memory_domains": [ 00:07:13.970 { 00:07:13.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.970 "dma_device_type": 2 00:07:13.970 } 00:07:13.970 ], 00:07:13.970 "driver_specific": {} 00:07:13.970 } 00:07:13.970 ] 00:07:13.970 13:30:53 -- common/autotest_common.sh@895 -- # return 0 00:07:13.970 13:30:53 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:13.970 [2024-07-10 13:30:53.264640] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:13.970 [2024-07-10 13:30:53.265046] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:13.970 [2024-07-10 13:30:53.265103] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:07:13.970 [2024-07-10 13:30:53.265118] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:13.970 [2024-07-10 13:30:53.265125] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:13.970 13:30:53 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:13.970 13:30:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:13.970 13:30:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:13.970 13:30:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:13.970 13:30:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:13.970 13:30:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:13.970 13:30:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:13.970 13:30:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:13.970 13:30:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:13.970 13:30:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:13.970 13:30:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:13.970 13:30:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:13.970 13:30:53 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:13.970 13:30:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.229 13:30:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:14.229 "name": "Existed_Raid", 00:07:14.229 "uuid": "a14dc5e2-3ec0-11ef-b9c4-5b09e08d4792", 00:07:14.229 "strip_size_kb": 64, 00:07:14.229 "state": "configuring", 00:07:14.229 "raid_level": "raid0", 00:07:14.229 "superblock": true, 00:07:14.229 "num_base_bdevs": 3, 00:07:14.229 "num_base_bdevs_discovered": 1, 00:07:14.229 "num_base_bdevs_operational": 3, 00:07:14.229 "base_bdevs_list": [ 00:07:14.229 { 00:07:14.229 "name": "BaseBdev1", 00:07:14.229 "uuid": "a0fb554a-3ec0-11ef-b9c4-5b09e08d4792", 00:07:14.229 "is_configured": true, 00:07:14.229 "data_offset": 2048, 00:07:14.229 "data_size": 63488 00:07:14.229 }, 00:07:14.229 { 00:07:14.229 "name": "BaseBdev2", 00:07:14.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.229 "is_configured": false, 00:07:14.229 "data_offset": 0, 00:07:14.229 "data_size": 0 00:07:14.229 }, 00:07:14.229 { 00:07:14.229 "name": "BaseBdev3", 00:07:14.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.229 "is_configured": false, 00:07:14.229 "data_offset": 0, 00:07:14.229 "data_size": 0 00:07:14.229 } 00:07:14.229 ] 00:07:14.229 }' 00:07:14.229 13:30:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:14.229 13:30:53 -- common/autotest_common.sh@10 -- # set +x 00:07:14.489 13:30:53 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:14.748 [2024-07-10 13:30:53.953066] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:14.748 BaseBdev2 00:07:14.748 13:30:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:14.748 13:30:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:07:14.748 13:30:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:14.748 13:30:53 -- common/autotest_common.sh@889 -- # local i 00:07:14.748 13:30:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:14.748 13:30:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:14.748 
13:30:53 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:15.006 13:30:54 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:15.006 [ 00:07:15.006 { 00:07:15.006 "name": "BaseBdev2", 00:07:15.006 "aliases": [ 00:07:15.006 "a1b6ce31-3ec0-11ef-b9c4-5b09e08d4792" 00:07:15.006 ], 00:07:15.006 "product_name": "Malloc disk", 00:07:15.006 "block_size": 512, 00:07:15.006 "num_blocks": 65536, 00:07:15.006 "uuid": "a1b6ce31-3ec0-11ef-b9c4-5b09e08d4792", 00:07:15.006 "assigned_rate_limits": { 00:07:15.006 "rw_ios_per_sec": 0, 00:07:15.006 "rw_mbytes_per_sec": 0, 00:07:15.006 "r_mbytes_per_sec": 0, 00:07:15.006 "w_mbytes_per_sec": 0 00:07:15.006 }, 00:07:15.006 "claimed": true, 00:07:15.006 "claim_type": "exclusive_write", 00:07:15.006 "zoned": false, 00:07:15.006 "supported_io_types": { 00:07:15.007 "read": true, 00:07:15.007 "write": true, 00:07:15.007 "unmap": true, 00:07:15.007 "write_zeroes": true, 00:07:15.007 "flush": true, 00:07:15.007 "reset": true, 00:07:15.007 "compare": false, 00:07:15.007 "compare_and_write": false, 00:07:15.007 "abort": true, 00:07:15.007 "nvme_admin": false, 00:07:15.007 "nvme_io": false 00:07:15.007 }, 00:07:15.007 "memory_domains": [ 00:07:15.007 { 00:07:15.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.007 "dma_device_type": 2 00:07:15.007 } 00:07:15.007 ], 00:07:15.007 "driver_specific": {} 00:07:15.007 } 00:07:15.007 ] 00:07:15.007 13:30:54 -- common/autotest_common.sh@895 -- # return 0 00:07:15.007 13:30:54 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:15.007 13:30:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:15.007 13:30:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:15.007 13:30:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:15.007 13:30:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:15.007 13:30:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:15.007 13:30:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:15.007 13:30:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:15.007 13:30:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:15.007 13:30:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:15.007 13:30:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:15.007 13:30:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:15.007 13:30:54 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:15.007 13:30:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.264 13:30:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:15.264 "name": "Existed_Raid", 00:07:15.264 "uuid": "a14dc5e2-3ec0-11ef-b9c4-5b09e08d4792", 00:07:15.265 "strip_size_kb": 64, 00:07:15.265 "state": "configuring", 00:07:15.265 "raid_level": "raid0", 00:07:15.265 "superblock": true, 00:07:15.265 "num_base_bdevs": 3, 00:07:15.265 "num_base_bdevs_discovered": 2, 00:07:15.265 "num_base_bdevs_operational": 3, 00:07:15.265 "base_bdevs_list": [ 00:07:15.265 { 00:07:15.265 "name": "BaseBdev1", 00:07:15.265 "uuid": "a0fb554a-3ec0-11ef-b9c4-5b09e08d4792", 00:07:15.265 "is_configured": true, 00:07:15.265 "data_offset": 2048, 00:07:15.265 "data_size": 63488 00:07:15.265 }, 00:07:15.265 { 
00:07:15.265 "name": "BaseBdev2", 00:07:15.265 "uuid": "a1b6ce31-3ec0-11ef-b9c4-5b09e08d4792", 00:07:15.265 "is_configured": true, 00:07:15.265 "data_offset": 2048, 00:07:15.265 "data_size": 63488 00:07:15.265 }, 00:07:15.265 { 00:07:15.265 "name": "BaseBdev3", 00:07:15.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.265 "is_configured": false, 00:07:15.265 "data_offset": 0, 00:07:15.265 "data_size": 0 00:07:15.265 } 00:07:15.265 ] 00:07:15.265 }' 00:07:15.265 13:30:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:15.265 13:30:54 -- common/autotest_common.sh@10 -- # set +x 00:07:15.523 13:30:54 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:07:15.780 [2024-07-10 13:30:54.985598] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:15.780 [2024-07-10 13:30:54.985653] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c8d1a00 00:07:15.780 [2024-07-10 13:30:54.985657] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:15.780 [2024-07-10 13:30:54.985673] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c934ec0 00:07:15.780 [2024-07-10 13:30:54.985706] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c8d1a00 00:07:15.780 [2024-07-10 13:30:54.985709] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c8d1a00 00:07:15.780 [2024-07-10 13:30:54.985723] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.780 BaseBdev3 00:07:15.780 13:30:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:07:15.780 13:30:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:07:15.780 13:30:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:15.780 13:30:55 -- common/autotest_common.sh@889 -- # local i 00:07:15.780 13:30:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:15.780 13:30:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:15.780 13:30:55 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:16.039 13:30:55 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:16.039 [ 00:07:16.039 { 00:07:16.039 "name": "BaseBdev3", 00:07:16.039 "aliases": [ 00:07:16.039 "a2545c25-3ec0-11ef-b9c4-5b09e08d4792" 00:07:16.039 ], 00:07:16.039 "product_name": "Malloc disk", 00:07:16.039 "block_size": 512, 00:07:16.039 "num_blocks": 65536, 00:07:16.039 "uuid": "a2545c25-3ec0-11ef-b9c4-5b09e08d4792", 00:07:16.039 "assigned_rate_limits": { 00:07:16.039 "rw_ios_per_sec": 0, 00:07:16.039 "rw_mbytes_per_sec": 0, 00:07:16.039 "r_mbytes_per_sec": 0, 00:07:16.039 "w_mbytes_per_sec": 0 00:07:16.039 }, 00:07:16.039 "claimed": true, 00:07:16.039 "claim_type": "exclusive_write", 00:07:16.039 "zoned": false, 00:07:16.039 "supported_io_types": { 00:07:16.039 "read": true, 00:07:16.039 "write": true, 00:07:16.039 "unmap": true, 00:07:16.039 "write_zeroes": true, 00:07:16.039 "flush": true, 00:07:16.039 "reset": true, 00:07:16.039 "compare": false, 00:07:16.039 "compare_and_write": false, 00:07:16.039 "abort": true, 00:07:16.039 "nvme_admin": false, 00:07:16.039 "nvme_io": false 00:07:16.039 }, 00:07:16.039 "memory_domains": [ 00:07:16.039 { 00:07:16.039 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.039 "dma_device_type": 2 00:07:16.039 } 00:07:16.039 ], 00:07:16.039 "driver_specific": {} 00:07:16.039 } 00:07:16.039 ] 00:07:16.039 13:30:55 -- common/autotest_common.sh@895 -- # return 0 00:07:16.039 13:30:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:16.039 13:30:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:16.039 13:30:55 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:16.039 13:30:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:16.039 13:30:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:16.039 13:30:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:16.039 13:30:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:16.039 13:30:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:16.039 13:30:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:16.039 13:30:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:16.039 13:30:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:16.039 13:30:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:16.039 13:30:55 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:16.039 13:30:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.297 13:30:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:16.297 "name": "Existed_Raid", 00:07:16.297 "uuid": "a14dc5e2-3ec0-11ef-b9c4-5b09e08d4792", 00:07:16.297 "strip_size_kb": 64, 00:07:16.297 "state": "online", 00:07:16.297 "raid_level": "raid0", 00:07:16.297 "superblock": true, 00:07:16.297 "num_base_bdevs": 3, 00:07:16.297 "num_base_bdevs_discovered": 3, 00:07:16.297 "num_base_bdevs_operational": 3, 00:07:16.297 "base_bdevs_list": [ 00:07:16.297 { 00:07:16.297 "name": "BaseBdev1", 00:07:16.297 "uuid": "a0fb554a-3ec0-11ef-b9c4-5b09e08d4792", 00:07:16.297 "is_configured": true, 00:07:16.297 "data_offset": 2048, 00:07:16.297 "data_size": 63488 00:07:16.297 }, 00:07:16.297 { 00:07:16.297 "name": "BaseBdev2", 00:07:16.297 "uuid": "a1b6ce31-3ec0-11ef-b9c4-5b09e08d4792", 00:07:16.297 "is_configured": true, 00:07:16.297 "data_offset": 2048, 00:07:16.297 "data_size": 63488 00:07:16.297 }, 00:07:16.297 { 00:07:16.297 "name": "BaseBdev3", 00:07:16.297 "uuid": "a2545c25-3ec0-11ef-b9c4-5b09e08d4792", 00:07:16.297 "is_configured": true, 00:07:16.297 "data_offset": 2048, 00:07:16.297 "data_size": 63488 00:07:16.297 } 00:07:16.297 ] 00:07:16.297 }' 00:07:16.297 13:30:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:16.297 13:30:55 -- common/autotest_common.sh@10 -- # set +x 00:07:16.555 13:30:55 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:16.813 [2024-07-10 13:30:56.026019] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:16.813 [2024-07-10 13:30:56.026037] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:16.813 [2024-07-10 13:30:56.026046] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.813 13:30:56 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:07:16.813 13:30:56 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:07:16.813 13:30:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:16.813 13:30:56 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:16.813 13:30:56 -- 
bdev/bdev_raid.sh@265 -- # expected_state=offline 00:07:16.813 13:30:56 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:16.813 13:30:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:16.813 13:30:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:07:16.813 13:30:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:16.813 13:30:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:16.813 13:30:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:16.813 13:30:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:16.813 13:30:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:16.813 13:30:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:16.813 13:30:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:16.813 13:30:56 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:16.813 13:30:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.071 13:30:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:17.071 "name": "Existed_Raid", 00:07:17.071 "uuid": "a14dc5e2-3ec0-11ef-b9c4-5b09e08d4792", 00:07:17.071 "strip_size_kb": 64, 00:07:17.071 "state": "offline", 00:07:17.071 "raid_level": "raid0", 00:07:17.071 "superblock": true, 00:07:17.071 "num_base_bdevs": 3, 00:07:17.071 "num_base_bdevs_discovered": 2, 00:07:17.071 "num_base_bdevs_operational": 2, 00:07:17.071 "base_bdevs_list": [ 00:07:17.071 { 00:07:17.071 "name": null, 00:07:17.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.071 "is_configured": false, 00:07:17.071 "data_offset": 2048, 00:07:17.071 "data_size": 63488 00:07:17.071 }, 00:07:17.071 { 00:07:17.071 "name": "BaseBdev2", 00:07:17.071 "uuid": "a1b6ce31-3ec0-11ef-b9c4-5b09e08d4792", 00:07:17.071 "is_configured": true, 00:07:17.071 "data_offset": 2048, 00:07:17.071 "data_size": 63488 00:07:17.071 }, 00:07:17.071 { 00:07:17.071 "name": "BaseBdev3", 00:07:17.071 "uuid": "a2545c25-3ec0-11ef-b9c4-5b09e08d4792", 00:07:17.071 "is_configured": true, 00:07:17.071 "data_offset": 2048, 00:07:17.071 "data_size": 63488 00:07:17.071 } 00:07:17.071 ] 00:07:17.071 }' 00:07:17.071 13:30:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:17.071 13:30:56 -- common/autotest_common.sh@10 -- # set +x 00:07:17.367 13:30:56 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:07:17.367 13:30:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:17.367 13:30:56 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:17.367 13:30:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:17.367 13:30:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:17.367 13:30:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:17.367 13:30:56 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:17.632 [2024-07-10 13:30:56.883068] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:17.632 13:30:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:17.632 13:30:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:17.632 13:30:56 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:17.632 13:30:56 -- bdev/bdev_raid.sh@274 -- # jq -r 
'.[0]["name"]' 00:07:17.891 13:30:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:17.891 13:30:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:17.892 13:30:57 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:07:18.151 [2024-07-10 13:30:57.279890] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:18.151 [2024-07-10 13:30:57.279912] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c8d1a00 name Existed_Raid, state offline 00:07:18.151 13:30:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:18.151 13:30:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:18.151 13:30:57 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:18.151 13:30:57 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:07:18.151 13:30:57 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:07:18.151 13:30:57 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:07:18.151 13:30:57 -- bdev/bdev_raid.sh@287 -- # killprocess 49512 00:07:18.151 13:30:57 -- common/autotest_common.sh@926 -- # '[' -z 49512 ']' 00:07:18.151 13:30:57 -- common/autotest_common.sh@930 -- # kill -0 49512 00:07:18.151 13:30:57 -- common/autotest_common.sh@931 -- # uname 00:07:18.151 13:30:57 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:18.151 13:30:57 -- common/autotest_common.sh@934 -- # tail -1 00:07:18.151 13:30:57 -- common/autotest_common.sh@934 -- # ps -c -o command 49512 00:07:18.151 13:30:57 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:18.151 13:30:57 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:18.151 killing process with pid 49512 00:07:18.151 13:30:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49512' 00:07:18.151 13:30:57 -- common/autotest_common.sh@945 -- # kill 49512 00:07:18.151 [2024-07-10 13:30:57.499036] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:18.151 [2024-07-10 13:30:57.499069] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.151 13:30:57 -- common/autotest_common.sh@950 -- # wait 49512 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@289 -- # return 0 00:07:18.410 00:07:18.410 real 0m8.382s 00:07:18.410 user 0m14.507s 00:07:18.410 sys 0m1.567s 00:07:18.410 13:30:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.410 13:30:57 -- common/autotest_common.sh@10 -- # set +x 00:07:18.410 ************************************ 00:07:18.410 END TEST raid_state_function_test_sb 00:07:18.410 ************************************ 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:07:18.410 13:30:57 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:18.410 13:30:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.410 13:30:57 -- common/autotest_common.sh@10 -- # set +x 00:07:18.410 ************************************ 00:07:18.410 START TEST raid_superblock_test 00:07:18.410 ************************************ 00:07:18.410 13:30:57 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:07:18.410 13:30:57 -- 
bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@357 -- # raid_pid=49748 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@358 -- # waitforlisten 49748 /var/tmp/spdk-raid.sock 00:07:18.410 13:30:57 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:18.410 13:30:57 -- common/autotest_common.sh@819 -- # '[' -z 49748 ']' 00:07:18.410 13:30:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:18.410 13:30:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:18.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:18.410 13:30:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:18.410 13:30:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:18.410 13:30:57 -- common/autotest_common.sh@10 -- # set +x 00:07:18.410 [2024-07-10 13:30:57.706034] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
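The superblock test drives the same RPC surface: once bdev_svc is listening on /var/tmp/spdk-raid.sock, each base device is a malloc bdev wrapped in a passthru bdev with a fixed UUID, and the three passthru bdevs are assembled into raid_bdev1 with a superblock. Condensed from the traces that follow (rpc.py abbreviates the full scripts/rpc.py path used there), the per-device sequence is roughly:
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    # ... likewise for malloc2/pt2 and malloc3/pt3 ...
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
The EAL startup notices and the individual create calls appear verbatim below.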
00:07:18.410 [2024-07-10 13:30:57.706326] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:18.977 EAL: TSC is not safe to use in SMP mode 00:07:18.977 EAL: TSC is not invariant 00:07:18.977 [2024-07-10 13:30:58.135289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.977 [2024-07-10 13:30:58.221811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.977 [2024-07-10 13:30:58.222270] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.978 [2024-07-10 13:30:58.222277] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.542 13:30:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:19.542 13:30:58 -- common/autotest_common.sh@852 -- # return 0 00:07:19.542 13:30:58 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:07:19.542 13:30:58 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:19.542 13:30:58 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:07:19.542 13:30:58 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:07:19.542 13:30:58 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:19.542 13:30:58 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:19.542 13:30:58 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:19.542 13:30:58 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:19.542 13:30:58 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:19.542 malloc1 00:07:19.542 13:30:58 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:19.801 [2024-07-10 13:30:58.985550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:19.801 [2024-07-10 13:30:58.985606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.801 [2024-07-10 13:30:58.986111] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bdca780 00:07:19.801 [2024-07-10 13:30:58.986133] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.801 [2024-07-10 13:30:58.986852] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.801 [2024-07-10 13:30:58.986883] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:19.801 pt1 00:07:19.801 13:30:59 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:19.801 13:30:59 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:19.801 13:30:59 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:07:19.801 13:30:59 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:07:19.801 13:30:59 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:19.801 13:30:59 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:19.801 13:30:59 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:19.801 13:30:59 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:19.801 13:30:59 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:20.059 malloc2 00:07:20.059 13:30:59 -- bdev/bdev_raid.sh@371 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:20.059 [2024-07-10 13:30:59.365713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:20.059 [2024-07-10 13:30:59.365755] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.059 [2024-07-10 13:30:59.365778] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bdcac80 00:07:20.059 [2024-07-10 13:30:59.365783] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.059 [2024-07-10 13:30:59.366213] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.059 [2024-07-10 13:30:59.366240] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:20.059 pt2 00:07:20.059 13:30:59 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:20.059 13:30:59 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:20.059 13:30:59 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:07:20.059 13:30:59 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:07:20.059 13:30:59 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:07:20.059 13:30:59 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:20.059 13:30:59 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:20.059 13:30:59 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:20.059 13:30:59 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:07:20.317 malloc3 00:07:20.317 13:30:59 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:20.575 [2024-07-10 13:30:59.753879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:20.575 [2024-07-10 13:30:59.753929] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.575 [2024-07-10 13:30:59.753949] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bdcb180 00:07:20.575 [2024-07-10 13:30:59.753971] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.575 [2024-07-10 13:30:59.754387] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.575 [2024-07-10 13:30:59.754430] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:20.575 pt3 00:07:20.575 13:30:59 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:20.575 13:30:59 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:20.576 13:30:59 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:07:20.834 [2024-07-10 13:30:59.973996] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:20.834 [2024-07-10 13:30:59.974386] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:20.834 [2024-07-10 13:30:59.974403] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:20.834 [2024-07-10 13:30:59.974449] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bdcb400 00:07:20.834 [2024-07-10 13:30:59.974454] 
bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:20.834 [2024-07-10 13:30:59.974480] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82be2de20 00:07:20.834 [2024-07-10 13:30:59.974546] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bdcb400 00:07:20.834 [2024-07-10 13:30:59.974550] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bdcb400 00:07:20.834 [2024-07-10 13:30:59.974567] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.834 13:30:59 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:20.834 13:30:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:20.834 13:30:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:20.834 13:30:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:20.834 13:30:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:20.834 13:30:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:20.834 13:30:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:20.834 13:30:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:20.834 13:30:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:20.834 13:30:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:20.834 13:30:59 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:20.834 13:30:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:20.834 13:31:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:20.834 "name": "raid_bdev1", 00:07:20.834 "uuid": "a54d8a08-3ec0-11ef-b9c4-5b09e08d4792", 00:07:20.834 "strip_size_kb": 64, 00:07:20.834 "state": "online", 00:07:20.834 "raid_level": "raid0", 00:07:20.834 "superblock": true, 00:07:20.834 "num_base_bdevs": 3, 00:07:20.834 "num_base_bdevs_discovered": 3, 00:07:20.834 "num_base_bdevs_operational": 3, 00:07:20.834 "base_bdevs_list": [ 00:07:20.834 { 00:07:20.834 "name": "pt1", 00:07:20.834 "uuid": "dbfb3543-44f4-5f57-9622-15011afee775", 00:07:20.834 "is_configured": true, 00:07:20.834 "data_offset": 2048, 00:07:20.834 "data_size": 63488 00:07:20.834 }, 00:07:20.834 { 00:07:20.834 "name": "pt2", 00:07:20.834 "uuid": "2e941b0b-0eb7-835c-82b6-d87bf05f5be3", 00:07:20.834 "is_configured": true, 00:07:20.834 "data_offset": 2048, 00:07:20.834 "data_size": 63488 00:07:20.834 }, 00:07:20.834 { 00:07:20.834 "name": "pt3", 00:07:20.834 "uuid": "5e938165-bbf5-f550-bab9-7aeedd4473e8", 00:07:20.834 "is_configured": true, 00:07:20.834 "data_offset": 2048, 00:07:20.834 "data_size": 63488 00:07:20.834 } 00:07:20.834 ] 00:07:20.834 }' 00:07:20.834 13:31:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:20.834 13:31:00 -- common/autotest_common.sh@10 -- # set +x 00:07:21.400 13:31:00 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:21.400 13:31:00 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:07:21.400 [2024-07-10 13:31:00.642297] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.400 13:31:00 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=a54d8a08-3ec0-11ef-b9c4-5b09e08d4792 00:07:21.400 13:31:00 -- bdev/bdev_raid.sh@380 -- # '[' -z a54d8a08-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:07:21.400 13:31:00 -- bdev/bdev_raid.sh@385 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:21.659 [2024-07-10 13:31:00.830329] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:21.659 [2024-07-10 13:31:00.830350] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:21.659 [2024-07-10 13:31:00.830366] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.659 [2024-07-10 13:31:00.830379] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.659 [2024-07-10 13:31:00.830382] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bdcb400 name raid_bdev1, state offline 00:07:21.659 13:31:00 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:21.659 13:31:00 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:07:21.982 13:31:01 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:07:21.982 13:31:01 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:07:21.982 13:31:01 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:21.982 13:31:01 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:21.982 13:31:01 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:21.982 13:31:01 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:22.240 13:31:01 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:22.240 13:31:01 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:07:22.499 13:31:01 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:22.499 13:31:01 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:22.499 13:31:01 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:07:22.499 13:31:01 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:07:22.499 13:31:01 -- common/autotest_common.sh@640 -- # local es=0 00:07:22.499 13:31:01 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:07:22.499 13:31:01 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:22.499 13:31:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:22.499 13:31:01 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:22.499 13:31:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:22.499 13:31:01 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:22.499 13:31:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:22.499 13:31:01 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:22.499 13:31:01 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:22.499 13:31:01 -- common/autotest_common.sh@643 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:07:22.758 [2024-07-10 13:31:02.034858] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:22.758 [2024-07-10 13:31:02.035347] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:22.758 [2024-07-10 13:31:02.035367] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:07:22.758 [2024-07-10 13:31:02.035386] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:07:22.758 [2024-07-10 13:31:02.035422] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:07:22.758 [2024-07-10 13:31:02.035431] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:07:22.758 [2024-07-10 13:31:02.035438] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:22.758 [2024-07-10 13:31:02.035442] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bdcb180 name raid_bdev1, state configuring 00:07:22.758 request: 00:07:22.758 { 00:07:22.758 "name": "raid_bdev1", 00:07:22.758 "raid_level": "raid0", 00:07:22.758 "base_bdevs": [ 00:07:22.758 "malloc1", 00:07:22.758 "malloc2", 00:07:22.758 "malloc3" 00:07:22.758 ], 00:07:22.758 "superblock": false, 00:07:22.758 "strip_size_kb": 64, 00:07:22.758 "method": "bdev_raid_create", 00:07:22.758 "req_id": 1 00:07:22.758 } 00:07:22.758 Got JSON-RPC error response 00:07:22.758 response: 00:07:22.758 { 00:07:22.758 "code": -17, 00:07:22.758 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:22.758 } 00:07:22.758 13:31:02 -- common/autotest_common.sh@643 -- # es=1 00:07:22.758 13:31:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:22.758 13:31:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:22.758 13:31:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:22.758 13:31:02 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:22.758 13:31:02 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:07:23.017 13:31:02 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:07:23.017 13:31:02 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:07:23.017 13:31:02 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:23.275 [2024-07-10 13:31:02.419008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:23.275 [2024-07-10 13:31:02.419062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.275 [2024-07-10 13:31:02.419089] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bdcac80 00:07:23.275 [2024-07-10 13:31:02.419096] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:23.275 [2024-07-10 13:31:02.419591] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.275 [2024-07-10 13:31:02.419625] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:23.275 [2024-07-10 13:31:02.419646] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:07:23.275 [2024-07-10 
13:31:02.419657] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:23.275 pt1 00:07:23.275 13:31:02 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:07:23.275 13:31:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:23.275 13:31:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:23.275 13:31:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:23.275 13:31:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:23.275 13:31:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:23.275 13:31:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:23.275 13:31:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:23.275 13:31:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:23.275 13:31:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:23.275 13:31:02 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:23.275 13:31:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.534 13:31:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:23.534 "name": "raid_bdev1", 00:07:23.534 "uuid": "a54d8a08-3ec0-11ef-b9c4-5b09e08d4792", 00:07:23.534 "strip_size_kb": 64, 00:07:23.534 "state": "configuring", 00:07:23.534 "raid_level": "raid0", 00:07:23.534 "superblock": true, 00:07:23.534 "num_base_bdevs": 3, 00:07:23.534 "num_base_bdevs_discovered": 1, 00:07:23.534 "num_base_bdevs_operational": 3, 00:07:23.534 "base_bdevs_list": [ 00:07:23.534 { 00:07:23.534 "name": "pt1", 00:07:23.534 "uuid": "dbfb3543-44f4-5f57-9622-15011afee775", 00:07:23.534 "is_configured": true, 00:07:23.534 "data_offset": 2048, 00:07:23.534 "data_size": 63488 00:07:23.534 }, 00:07:23.534 { 00:07:23.534 "name": null, 00:07:23.534 "uuid": "2e941b0b-0eb7-835c-82b6-d87bf05f5be3", 00:07:23.534 "is_configured": false, 00:07:23.534 "data_offset": 2048, 00:07:23.534 "data_size": 63488 00:07:23.534 }, 00:07:23.534 { 00:07:23.534 "name": null, 00:07:23.534 "uuid": "5e938165-bbf5-f550-bab9-7aeedd4473e8", 00:07:23.534 "is_configured": false, 00:07:23.534 "data_offset": 2048, 00:07:23.534 "data_size": 63488 00:07:23.534 } 00:07:23.534 ] 00:07:23.534 }' 00:07:23.534 13:31:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:23.534 13:31:02 -- common/autotest_common.sh@10 -- # set +x 00:07:23.797 13:31:02 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:07:23.797 13:31:02 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:23.797 [2024-07-10 13:31:03.147322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:23.797 [2024-07-10 13:31:03.147374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.797 [2024-07-10 13:31:03.147402] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bdcb680 00:07:23.797 [2024-07-10 13:31:03.147409] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:23.797 [2024-07-10 13:31:03.147508] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.797 [2024-07-10 13:31:03.147517] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:23.797 [2024-07-10 13:31:03.147535] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid 
superblock found on bdev pt2 00:07:23.797 [2024-07-10 13:31:03.147542] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:23.797 pt2 00:07:24.055 13:31:03 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:24.055 [2024-07-10 13:31:03.355433] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:07:24.055 13:31:03 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:07:24.055 13:31:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:24.055 13:31:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:24.055 13:31:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:24.055 13:31:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:24.055 13:31:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:24.055 13:31:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:24.055 13:31:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:24.055 13:31:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:24.055 13:31:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:24.055 13:31:03 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:24.055 13:31:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:24.313 13:31:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:24.313 "name": "raid_bdev1", 00:07:24.313 "uuid": "a54d8a08-3ec0-11ef-b9c4-5b09e08d4792", 00:07:24.313 "strip_size_kb": 64, 00:07:24.313 "state": "configuring", 00:07:24.313 "raid_level": "raid0", 00:07:24.313 "superblock": true, 00:07:24.313 "num_base_bdevs": 3, 00:07:24.313 "num_base_bdevs_discovered": 1, 00:07:24.313 "num_base_bdevs_operational": 3, 00:07:24.313 "base_bdevs_list": [ 00:07:24.313 { 00:07:24.313 "name": "pt1", 00:07:24.313 "uuid": "dbfb3543-44f4-5f57-9622-15011afee775", 00:07:24.313 "is_configured": true, 00:07:24.313 "data_offset": 2048, 00:07:24.313 "data_size": 63488 00:07:24.313 }, 00:07:24.313 { 00:07:24.313 "name": null, 00:07:24.313 "uuid": "2e941b0b-0eb7-835c-82b6-d87bf05f5be3", 00:07:24.313 "is_configured": false, 00:07:24.313 "data_offset": 2048, 00:07:24.313 "data_size": 63488 00:07:24.313 }, 00:07:24.313 { 00:07:24.313 "name": null, 00:07:24.313 "uuid": "5e938165-bbf5-f550-bab9-7aeedd4473e8", 00:07:24.313 "is_configured": false, 00:07:24.313 "data_offset": 2048, 00:07:24.313 "data_size": 63488 00:07:24.313 } 00:07:24.313 ] 00:07:24.313 }' 00:07:24.313 13:31:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:24.313 13:31:03 -- common/autotest_common.sh@10 -- # set +x 00:07:24.571 13:31:03 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:07:24.571 13:31:03 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:24.571 13:31:03 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:24.829 [2024-07-10 13:31:04.047713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:24.829 [2024-07-10 13:31:04.047786] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.829 [2024-07-10 13:31:04.047813] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bdcb680 00:07:24.829 [2024-07-10 13:31:04.047820] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.829 [2024-07-10 13:31:04.047920] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.829 [2024-07-10 13:31:04.047939] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:24.829 [2024-07-10 13:31:04.047958] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:07:24.829 [2024-07-10 13:31:04.047965] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:24.829 pt2 00:07:24.829 13:31:04 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:07:24.829 13:31:04 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:24.829 13:31:04 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:25.087 [2024-07-10 13:31:04.251786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:25.087 [2024-07-10 13:31:04.251831] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.087 [2024-07-10 13:31:04.251868] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bdcb400 00:07:25.087 [2024-07-10 13:31:04.251875] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.087 [2024-07-10 13:31:04.251957] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.087 [2024-07-10 13:31:04.251970] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:25.087 [2024-07-10 13:31:04.251988] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:07:25.087 [2024-07-10 13:31:04.251997] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:25.087 [2024-07-10 13:31:04.252022] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bdca780 00:07:25.087 [2024-07-10 13:31:04.252029] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:25.087 [2024-07-10 13:31:04.252054] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82be2de20 00:07:25.087 [2024-07-10 13:31:04.252102] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bdca780 00:07:25.087 [2024-07-10 13:31:04.252109] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bdca780 00:07:25.087 [2024-07-10 13:31:04.252127] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.087 pt3 00:07:25.087 13:31:04 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:07:25.087 13:31:04 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:25.087 13:31:04 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:25.087 13:31:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:25.087 13:31:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:25.087 13:31:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:25.087 13:31:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:25.087 13:31:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:25.087 13:31:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:25.087 13:31:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:25.087 13:31:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:25.087 13:31:04 
-- bdev/bdev_raid.sh@125 -- # local tmp 00:07:25.087 13:31:04 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:25.087 13:31:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.346 13:31:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:25.346 "name": "raid_bdev1", 00:07:25.346 "uuid": "a54d8a08-3ec0-11ef-b9c4-5b09e08d4792", 00:07:25.346 "strip_size_kb": 64, 00:07:25.346 "state": "online", 00:07:25.346 "raid_level": "raid0", 00:07:25.346 "superblock": true, 00:07:25.346 "num_base_bdevs": 3, 00:07:25.346 "num_base_bdevs_discovered": 3, 00:07:25.346 "num_base_bdevs_operational": 3, 00:07:25.346 "base_bdevs_list": [ 00:07:25.346 { 00:07:25.346 "name": "pt1", 00:07:25.346 "uuid": "dbfb3543-44f4-5f57-9622-15011afee775", 00:07:25.346 "is_configured": true, 00:07:25.346 "data_offset": 2048, 00:07:25.346 "data_size": 63488 00:07:25.346 }, 00:07:25.346 { 00:07:25.346 "name": "pt2", 00:07:25.346 "uuid": "2e941b0b-0eb7-835c-82b6-d87bf05f5be3", 00:07:25.346 "is_configured": true, 00:07:25.346 "data_offset": 2048, 00:07:25.346 "data_size": 63488 00:07:25.346 }, 00:07:25.346 { 00:07:25.346 "name": "pt3", 00:07:25.346 "uuid": "5e938165-bbf5-f550-bab9-7aeedd4473e8", 00:07:25.346 "is_configured": true, 00:07:25.346 "data_offset": 2048, 00:07:25.346 "data_size": 63488 00:07:25.346 } 00:07:25.346 ] 00:07:25.346 }' 00:07:25.346 13:31:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:25.346 13:31:04 -- common/autotest_common.sh@10 -- # set +x 00:07:25.604 13:31:04 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:25.604 13:31:04 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:07:25.863 [2024-07-10 13:31:04.968097] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.863 13:31:04 -- bdev/bdev_raid.sh@430 -- # '[' a54d8a08-3ec0-11ef-b9c4-5b09e08d4792 '!=' a54d8a08-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:07:25.863 13:31:04 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:07:25.863 13:31:04 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:25.863 13:31:04 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:25.863 13:31:04 -- bdev/bdev_raid.sh@511 -- # killprocess 49748 00:07:25.863 13:31:04 -- common/autotest_common.sh@926 -- # '[' -z 49748 ']' 00:07:25.863 13:31:04 -- common/autotest_common.sh@930 -- # kill -0 49748 00:07:25.863 13:31:04 -- common/autotest_common.sh@931 -- # uname 00:07:25.863 13:31:04 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:25.863 13:31:04 -- common/autotest_common.sh@934 -- # ps -c -o command 49748 00:07:25.863 13:31:04 -- common/autotest_common.sh@934 -- # tail -1 00:07:25.863 13:31:05 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:25.863 13:31:05 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:25.863 killing process with pid 49748 00:07:25.863 13:31:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49748' 00:07:25.863 13:31:05 -- common/autotest_common.sh@945 -- # kill 49748 00:07:25.863 [2024-07-10 13:31:05.003639] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.863 [2024-07-10 13:31:05.003675] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.863 [2024-07-10 13:31:05.003690] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:25.863 
[2024-07-10 13:31:05.003694] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bdca780 name raid_bdev1, state offline 00:07:25.863 13:31:05 -- common/autotest_common.sh@950 -- # wait 49748 00:07:25.863 [2024-07-10 13:31:05.018209] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.863 13:31:05 -- bdev/bdev_raid.sh@513 -- # return 0 00:07:25.863 00:07:25.863 real 0m7.479s 00:07:25.863 user 0m12.777s 00:07:25.863 sys 0m1.470s 00:07:25.863 13:31:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.863 13:31:05 -- common/autotest_common.sh@10 -- # set +x 00:07:25.863 ************************************ 00:07:25.863 END TEST raid_superblock_test 00:07:25.863 ************************************ 00:07:25.863 13:31:05 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:07:25.863 13:31:05 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:07:25.863 13:31:05 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:25.863 13:31:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.863 13:31:05 -- common/autotest_common.sh@10 -- # set +x 00:07:25.863 ************************************ 00:07:25.863 START TEST raid_state_function_test 00:07:25.863 ************************************ 00:07:25.863 13:31:05 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:07:25.863 13:31:05 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:07:25.863 13:31:05 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:07:25.863 13:31:05 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:07:25.863 13:31:05 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:25.863 13:31:05 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:25.863 13:31:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:25.863 13:31:05 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:25.863 13:31:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:25.863 13:31:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:25.863 13:31:05 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:25.863 13:31:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:25.863 13:31:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:25.863 13:31:05 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:07:25.863 13:31:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:25.863 13:31:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:26.121 13:31:05 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:26.121 13:31:05 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:26.121 13:31:05 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:26.121 13:31:05 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:26.121 13:31:05 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:26.121 13:31:05 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:26.121 13:31:05 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:07:26.121 13:31:05 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:07:26.121 13:31:05 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:07:26.121 13:31:05 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:07:26.121 13:31:05 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:07:26.121 13:31:05 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:26.121 13:31:05 -- bdev/bdev_raid.sh@226 -- # 
raid_pid=49929 00:07:26.121 Process raid pid: 49929 00:07:26.121 13:31:05 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 49929' 00:07:26.121 13:31:05 -- bdev/bdev_raid.sh@228 -- # waitforlisten 49929 /var/tmp/spdk-raid.sock 00:07:26.121 13:31:05 -- common/autotest_common.sh@819 -- # '[' -z 49929 ']' 00:07:26.121 13:31:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:26.121 13:31:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:26.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:26.121 13:31:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:26.121 13:31:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:26.121 13:31:05 -- common/autotest_common.sh@10 -- # set +x 00:07:26.121 [2024-07-10 13:31:05.230748] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:26.121 [2024-07-10 13:31:05.230912] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:26.380 EAL: TSC is not safe to use in SMP mode 00:07:26.380 EAL: TSC is not invariant 00:07:26.380 [2024-07-10 13:31:05.717087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.652 [2024-07-10 13:31:05.810263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.652 [2024-07-10 13:31:05.810783] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.652 [2024-07-10 13:31:05.810808] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.910 13:31:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:26.910 13:31:06 -- common/autotest_common.sh@852 -- # return 0 00:07:26.910 13:31:06 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:27.169 [2024-07-10 13:31:06.338234] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:27.169 [2024-07-10 13:31:06.338287] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:27.169 [2024-07-10 13:31:06.338292] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.169 [2024-07-10 13:31:06.338299] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.169 [2024-07-10 13:31:06.338302] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:27.169 [2024-07-10 13:31:06.338309] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:27.170 13:31:06 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:27.170 13:31:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:27.170 13:31:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:27.170 13:31:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:27.170 13:31:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:27.170 13:31:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:27.170 13:31:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:27.170 13:31:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:27.170 13:31:06 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:27.170 13:31:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:27.170 13:31:06 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:27.170 13:31:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.429 13:31:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:27.429 "name": "Existed_Raid", 00:07:27.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.429 "strip_size_kb": 64, 00:07:27.429 "state": "configuring", 00:07:27.429 "raid_level": "concat", 00:07:27.429 "superblock": false, 00:07:27.429 "num_base_bdevs": 3, 00:07:27.429 "num_base_bdevs_discovered": 0, 00:07:27.429 "num_base_bdevs_operational": 3, 00:07:27.429 "base_bdevs_list": [ 00:07:27.429 { 00:07:27.429 "name": "BaseBdev1", 00:07:27.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.429 "is_configured": false, 00:07:27.429 "data_offset": 0, 00:07:27.429 "data_size": 0 00:07:27.429 }, 00:07:27.429 { 00:07:27.429 "name": "BaseBdev2", 00:07:27.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.429 "is_configured": false, 00:07:27.429 "data_offset": 0, 00:07:27.429 "data_size": 0 00:07:27.429 }, 00:07:27.429 { 00:07:27.429 "name": "BaseBdev3", 00:07:27.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.429 "is_configured": false, 00:07:27.429 "data_offset": 0, 00:07:27.429 "data_size": 0 00:07:27.429 } 00:07:27.429 ] 00:07:27.429 }' 00:07:27.429 13:31:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:27.429 13:31:06 -- common/autotest_common.sh@10 -- # set +x 00:07:27.687 13:31:06 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:27.946 [2024-07-10 13:31:07.054501] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:27.946 [2024-07-10 13:31:07.054526] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829b5b500 name Existed_Raid, state configuring 00:07:27.946 13:31:07 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:27.946 [2024-07-10 13:31:07.266585] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:27.946 [2024-07-10 13:31:07.266634] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:27.946 [2024-07-10 13:31:07.266638] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.946 [2024-07-10 13:31:07.266646] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.946 [2024-07-10 13:31:07.266650] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:27.946 [2024-07-10 13:31:07.266656] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:27.946 13:31:07 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:28.205 [2024-07-10 13:31:07.475572] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.205 BaseBdev1 00:07:28.205 13:31:07 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:28.205 13:31:07 -- common/autotest_common.sh@887 -- # local 
bdev_name=BaseBdev1 00:07:28.205 13:31:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:28.205 13:31:07 -- common/autotest_common.sh@889 -- # local i 00:07:28.205 13:31:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:28.205 13:31:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:28.205 13:31:07 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:28.463 13:31:07 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:28.723 [ 00:07:28.723 { 00:07:28.723 "name": "BaseBdev1", 00:07:28.723 "aliases": [ 00:07:28.723 "a9c60d1e-3ec0-11ef-b9c4-5b09e08d4792" 00:07:28.723 ], 00:07:28.723 "product_name": "Malloc disk", 00:07:28.723 "block_size": 512, 00:07:28.723 "num_blocks": 65536, 00:07:28.723 "uuid": "a9c60d1e-3ec0-11ef-b9c4-5b09e08d4792", 00:07:28.723 "assigned_rate_limits": { 00:07:28.723 "rw_ios_per_sec": 0, 00:07:28.723 "rw_mbytes_per_sec": 0, 00:07:28.723 "r_mbytes_per_sec": 0, 00:07:28.723 "w_mbytes_per_sec": 0 00:07:28.723 }, 00:07:28.723 "claimed": true, 00:07:28.723 "claim_type": "exclusive_write", 00:07:28.723 "zoned": false, 00:07:28.723 "supported_io_types": { 00:07:28.723 "read": true, 00:07:28.723 "write": true, 00:07:28.723 "unmap": true, 00:07:28.723 "write_zeroes": true, 00:07:28.723 "flush": true, 00:07:28.723 "reset": true, 00:07:28.723 "compare": false, 00:07:28.723 "compare_and_write": false, 00:07:28.723 "abort": true, 00:07:28.723 "nvme_admin": false, 00:07:28.723 "nvme_io": false 00:07:28.723 }, 00:07:28.723 "memory_domains": [ 00:07:28.723 { 00:07:28.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.723 "dma_device_type": 2 00:07:28.723 } 00:07:28.723 ], 00:07:28.723 "driver_specific": {} 00:07:28.723 } 00:07:28.723 ] 00:07:28.723 13:31:07 -- common/autotest_common.sh@895 -- # return 0 00:07:28.723 13:31:07 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:28.723 13:31:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:28.723 13:31:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:28.723 13:31:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:28.723 13:31:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:28.723 13:31:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:28.723 13:31:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:28.723 13:31:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:28.723 13:31:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:28.723 13:31:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:28.723 13:31:07 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:28.723 13:31:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.982 13:31:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:28.982 "name": "Existed_Raid", 00:07:28.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.982 "strip_size_kb": 64, 00:07:28.982 "state": "configuring", 00:07:28.982 "raid_level": "concat", 00:07:28.982 "superblock": false, 00:07:28.982 "num_base_bdevs": 3, 00:07:28.982 "num_base_bdevs_discovered": 1, 00:07:28.982 "num_base_bdevs_operational": 3, 00:07:28.982 "base_bdevs_list": [ 00:07:28.982 { 00:07:28.982 "name": "BaseBdev1", 
00:07:28.982 "uuid": "a9c60d1e-3ec0-11ef-b9c4-5b09e08d4792", 00:07:28.982 "is_configured": true, 00:07:28.982 "data_offset": 0, 00:07:28.982 "data_size": 65536 00:07:28.982 }, 00:07:28.982 { 00:07:28.982 "name": "BaseBdev2", 00:07:28.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.982 "is_configured": false, 00:07:28.982 "data_offset": 0, 00:07:28.982 "data_size": 0 00:07:28.982 }, 00:07:28.982 { 00:07:28.982 "name": "BaseBdev3", 00:07:28.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.982 "is_configured": false, 00:07:28.982 "data_offset": 0, 00:07:28.982 "data_size": 0 00:07:28.982 } 00:07:28.982 ] 00:07:28.982 }' 00:07:28.982 13:31:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:28.982 13:31:08 -- common/autotest_common.sh@10 -- # set +x 00:07:29.242 13:31:08 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:29.501 [2024-07-10 13:31:08.643106] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:29.501 [2024-07-10 13:31:08.643138] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829b5b500 name Existed_Raid, state configuring 00:07:29.501 13:31:08 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:07:29.501 13:31:08 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:29.501 [2024-07-10 13:31:08.859197] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.501 [2024-07-10 13:31:08.859860] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.501 [2024-07-10 13:31:08.859904] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:29.501 [2024-07-10 13:31:08.859910] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:29.501 [2024-07-10 13:31:08.859918] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:29.760 13:31:08 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:29.760 13:31:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:29.760 13:31:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:29.760 13:31:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:29.760 13:31:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:29.760 13:31:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:29.760 13:31:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:29.760 13:31:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:29.760 13:31:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:29.760 13:31:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:29.760 13:31:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:29.760 13:31:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:29.760 13:31:08 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:29.760 13:31:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.760 13:31:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:29.760 "name": "Existed_Raid", 00:07:29.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.760 "strip_size_kb": 64, 
00:07:29.760 "state": "configuring", 00:07:29.760 "raid_level": "concat", 00:07:29.760 "superblock": false, 00:07:29.760 "num_base_bdevs": 3, 00:07:29.760 "num_base_bdevs_discovered": 1, 00:07:29.760 "num_base_bdevs_operational": 3, 00:07:29.760 "base_bdevs_list": [ 00:07:29.760 { 00:07:29.760 "name": "BaseBdev1", 00:07:29.760 "uuid": "a9c60d1e-3ec0-11ef-b9c4-5b09e08d4792", 00:07:29.760 "is_configured": true, 00:07:29.760 "data_offset": 0, 00:07:29.760 "data_size": 65536 00:07:29.760 }, 00:07:29.760 { 00:07:29.760 "name": "BaseBdev2", 00:07:29.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.760 "is_configured": false, 00:07:29.760 "data_offset": 0, 00:07:29.760 "data_size": 0 00:07:29.760 }, 00:07:29.760 { 00:07:29.760 "name": "BaseBdev3", 00:07:29.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.760 "is_configured": false, 00:07:29.760 "data_offset": 0, 00:07:29.760 "data_size": 0 00:07:29.760 } 00:07:29.760 ] 00:07:29.760 }' 00:07:29.760 13:31:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:29.760 13:31:09 -- common/autotest_common.sh@10 -- # set +x 00:07:30.327 13:31:09 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:30.327 [2024-07-10 13:31:09.583545] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:30.327 BaseBdev2 00:07:30.327 13:31:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:30.327 13:31:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:07:30.327 13:31:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:30.327 13:31:09 -- common/autotest_common.sh@889 -- # local i 00:07:30.327 13:31:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:30.327 13:31:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:30.327 13:31:09 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:30.585 13:31:09 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:30.844 [ 00:07:30.844 { 00:07:30.844 "name": "BaseBdev2", 00:07:30.844 "aliases": [ 00:07:30.844 "ab07d31a-3ec0-11ef-b9c4-5b09e08d4792" 00:07:30.844 ], 00:07:30.844 "product_name": "Malloc disk", 00:07:30.844 "block_size": 512, 00:07:30.844 "num_blocks": 65536, 00:07:30.844 "uuid": "ab07d31a-3ec0-11ef-b9c4-5b09e08d4792", 00:07:30.844 "assigned_rate_limits": { 00:07:30.844 "rw_ios_per_sec": 0, 00:07:30.844 "rw_mbytes_per_sec": 0, 00:07:30.844 "r_mbytes_per_sec": 0, 00:07:30.844 "w_mbytes_per_sec": 0 00:07:30.844 }, 00:07:30.844 "claimed": true, 00:07:30.844 "claim_type": "exclusive_write", 00:07:30.844 "zoned": false, 00:07:30.844 "supported_io_types": { 00:07:30.844 "read": true, 00:07:30.844 "write": true, 00:07:30.844 "unmap": true, 00:07:30.844 "write_zeroes": true, 00:07:30.844 "flush": true, 00:07:30.844 "reset": true, 00:07:30.844 "compare": false, 00:07:30.844 "compare_and_write": false, 00:07:30.844 "abort": true, 00:07:30.844 "nvme_admin": false, 00:07:30.844 "nvme_io": false 00:07:30.844 }, 00:07:30.844 "memory_domains": [ 00:07:30.844 { 00:07:30.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.844 "dma_device_type": 2 00:07:30.844 } 00:07:30.844 ], 00:07:30.844 "driver_specific": {} 00:07:30.844 } 00:07:30.844 ] 00:07:30.844 13:31:10 -- common/autotest_common.sh@895 -- # return 0 00:07:30.844 13:31:10 -- 
bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:30.844 13:31:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:30.844 13:31:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:30.844 13:31:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:30.844 13:31:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:30.844 13:31:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:30.844 13:31:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:30.844 13:31:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:30.844 13:31:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:30.844 13:31:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:30.844 13:31:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:30.844 13:31:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:30.844 13:31:10 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:30.844 13:31:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.102 13:31:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:31.102 "name": "Existed_Raid", 00:07:31.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.102 "strip_size_kb": 64, 00:07:31.102 "state": "configuring", 00:07:31.102 "raid_level": "concat", 00:07:31.102 "superblock": false, 00:07:31.102 "num_base_bdevs": 3, 00:07:31.102 "num_base_bdevs_discovered": 2, 00:07:31.102 "num_base_bdevs_operational": 3, 00:07:31.102 "base_bdevs_list": [ 00:07:31.102 { 00:07:31.102 "name": "BaseBdev1", 00:07:31.102 "uuid": "a9c60d1e-3ec0-11ef-b9c4-5b09e08d4792", 00:07:31.102 "is_configured": true, 00:07:31.102 "data_offset": 0, 00:07:31.102 "data_size": 65536 00:07:31.102 }, 00:07:31.102 { 00:07:31.102 "name": "BaseBdev2", 00:07:31.102 "uuid": "ab07d31a-3ec0-11ef-b9c4-5b09e08d4792", 00:07:31.102 "is_configured": true, 00:07:31.102 "data_offset": 0, 00:07:31.102 "data_size": 65536 00:07:31.102 }, 00:07:31.102 { 00:07:31.102 "name": "BaseBdev3", 00:07:31.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.102 "is_configured": false, 00:07:31.102 "data_offset": 0, 00:07:31.102 "data_size": 0 00:07:31.102 } 00:07:31.102 ] 00:07:31.102 }' 00:07:31.102 13:31:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:31.102 13:31:10 -- common/autotest_common.sh@10 -- # set +x 00:07:31.361 13:31:10 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:07:31.361 [2024-07-10 13:31:10.695944] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:31.361 [2024-07-10 13:31:10.695971] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x829b5ba00 00:07:31.361 [2024-07-10 13:31:10.695975] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:31.361 [2024-07-10 13:31:10.695995] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829bbeec0 00:07:31.361 [2024-07-10 13:31:10.696082] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829b5ba00 00:07:31.361 [2024-07-10 13:31:10.696086] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x829b5ba00 00:07:31.361 [2024-07-10 13:31:10.696114] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.361 
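At this point in the trace the third base bdev has just been created and the concat array has assembled; the lines that follow verify that Existed_Raid reports "online" and then delete a base bdev to confirm it drops to "offline" (concat carries no redundancy, which is why has_redundancy returns 1 further down). Condensed into the RPC calls the test issues, this is roughly the following sequence (the $rpc shorthand and the inline jq state checks are illustrative, not lines from the script itself):

    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # register the array before all members exist: state stays "configuring"
    $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # create the malloc base bdevs one by one; each is claimed by Existed_Raid as it appears
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $rpc bdev_malloc_create 32 512 -b "$b"
    done
    # with all three members present the array reports "online"
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
    # concat has no redundancy, so losing a single member takes it "offline"
    $rpc bdev_malloc_delete BaseBdev1
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'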
BaseBdev3 00:07:31.361 13:31:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:07:31.361 13:31:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:07:31.361 13:31:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:31.361 13:31:10 -- common/autotest_common.sh@889 -- # local i 00:07:31.361 13:31:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:31.361 13:31:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:31.361 13:31:10 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:31.619 13:31:10 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:31.877 [ 00:07:31.877 { 00:07:31.877 "name": "BaseBdev3", 00:07:31.877 "aliases": [ 00:07:31.877 "abb1909d-3ec0-11ef-b9c4-5b09e08d4792" 00:07:31.877 ], 00:07:31.877 "product_name": "Malloc disk", 00:07:31.877 "block_size": 512, 00:07:31.877 "num_blocks": 65536, 00:07:31.877 "uuid": "abb1909d-3ec0-11ef-b9c4-5b09e08d4792", 00:07:31.877 "assigned_rate_limits": { 00:07:31.877 "rw_ios_per_sec": 0, 00:07:31.877 "rw_mbytes_per_sec": 0, 00:07:31.877 "r_mbytes_per_sec": 0, 00:07:31.877 "w_mbytes_per_sec": 0 00:07:31.877 }, 00:07:31.877 "claimed": true, 00:07:31.877 "claim_type": "exclusive_write", 00:07:31.877 "zoned": false, 00:07:31.877 "supported_io_types": { 00:07:31.877 "read": true, 00:07:31.877 "write": true, 00:07:31.877 "unmap": true, 00:07:31.877 "write_zeroes": true, 00:07:31.877 "flush": true, 00:07:31.877 "reset": true, 00:07:31.877 "compare": false, 00:07:31.877 "compare_and_write": false, 00:07:31.877 "abort": true, 00:07:31.877 "nvme_admin": false, 00:07:31.877 "nvme_io": false 00:07:31.877 }, 00:07:31.877 "memory_domains": [ 00:07:31.877 { 00:07:31.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.877 "dma_device_type": 2 00:07:31.877 } 00:07:31.877 ], 00:07:31.877 "driver_specific": {} 00:07:31.877 } 00:07:31.877 ] 00:07:31.877 13:31:11 -- common/autotest_common.sh@895 -- # return 0 00:07:31.877 13:31:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:31.877 13:31:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:31.877 13:31:11 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:07:31.877 13:31:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:31.877 13:31:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:31.877 13:31:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:31.878 13:31:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:31.878 13:31:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:31.878 13:31:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:31.878 13:31:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:31.878 13:31:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:31.878 13:31:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:31.878 13:31:11 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:31.878 13:31:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.143 13:31:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:32.143 "name": "Existed_Raid", 00:07:32.143 "uuid": "abb195bd-3ec0-11ef-b9c4-5b09e08d4792", 00:07:32.143 "strip_size_kb": 64, 00:07:32.143 "state": "online", 00:07:32.143 
"raid_level": "concat", 00:07:32.143 "superblock": false, 00:07:32.143 "num_base_bdevs": 3, 00:07:32.143 "num_base_bdevs_discovered": 3, 00:07:32.143 "num_base_bdevs_operational": 3, 00:07:32.143 "base_bdevs_list": [ 00:07:32.143 { 00:07:32.143 "name": "BaseBdev1", 00:07:32.143 "uuid": "a9c60d1e-3ec0-11ef-b9c4-5b09e08d4792", 00:07:32.143 "is_configured": true, 00:07:32.143 "data_offset": 0, 00:07:32.143 "data_size": 65536 00:07:32.143 }, 00:07:32.143 { 00:07:32.143 "name": "BaseBdev2", 00:07:32.143 "uuid": "ab07d31a-3ec0-11ef-b9c4-5b09e08d4792", 00:07:32.143 "is_configured": true, 00:07:32.143 "data_offset": 0, 00:07:32.143 "data_size": 65536 00:07:32.143 }, 00:07:32.143 { 00:07:32.143 "name": "BaseBdev3", 00:07:32.143 "uuid": "abb1909d-3ec0-11ef-b9c4-5b09e08d4792", 00:07:32.143 "is_configured": true, 00:07:32.143 "data_offset": 0, 00:07:32.143 "data_size": 65536 00:07:32.143 } 00:07:32.143 ] 00:07:32.143 }' 00:07:32.143 13:31:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:32.143 13:31:11 -- common/autotest_common.sh@10 -- # set +x 00:07:32.401 13:31:11 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:32.659 [2024-07-10 13:31:11.828244] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:32.659 [2024-07-10 13:31:11.828268] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:32.659 [2024-07-10 13:31:11.828279] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.659 13:31:11 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:07:32.659 13:31:11 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:07:32.659 13:31:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:32.659 13:31:11 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:32.659 13:31:11 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:07:32.659 13:31:11 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:07:32.659 13:31:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:32.659 13:31:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:07:32.659 13:31:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:32.659 13:31:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:32.659 13:31:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:32.659 13:31:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:32.659 13:31:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:32.659 13:31:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:32.659 13:31:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:32.659 13:31:11 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:32.659 13:31:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.915 13:31:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:32.915 "name": "Existed_Raid", 00:07:32.915 "uuid": "abb195bd-3ec0-11ef-b9c4-5b09e08d4792", 00:07:32.915 "strip_size_kb": 64, 00:07:32.915 "state": "offline", 00:07:32.915 "raid_level": "concat", 00:07:32.915 "superblock": false, 00:07:32.915 "num_base_bdevs": 3, 00:07:32.915 "num_base_bdevs_discovered": 2, 00:07:32.915 "num_base_bdevs_operational": 2, 00:07:32.915 "base_bdevs_list": [ 00:07:32.915 { 00:07:32.915 "name": null, 00:07:32.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.915 
"is_configured": false, 00:07:32.915 "data_offset": 0, 00:07:32.915 "data_size": 65536 00:07:32.915 }, 00:07:32.915 { 00:07:32.915 "name": "BaseBdev2", 00:07:32.915 "uuid": "ab07d31a-3ec0-11ef-b9c4-5b09e08d4792", 00:07:32.915 "is_configured": true, 00:07:32.915 "data_offset": 0, 00:07:32.915 "data_size": 65536 00:07:32.915 }, 00:07:32.915 { 00:07:32.915 "name": "BaseBdev3", 00:07:32.915 "uuid": "abb1909d-3ec0-11ef-b9c4-5b09e08d4792", 00:07:32.915 "is_configured": true, 00:07:32.915 "data_offset": 0, 00:07:32.915 "data_size": 65536 00:07:32.915 } 00:07:32.915 ] 00:07:32.915 }' 00:07:32.915 13:31:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:32.915 13:31:12 -- common/autotest_common.sh@10 -- # set +x 00:07:33.173 13:31:12 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:07:33.173 13:31:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:33.173 13:31:12 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:33.173 13:31:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:33.173 13:31:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:33.173 13:31:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:33.173 13:31:12 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:33.430 [2024-07-10 13:31:12.705203] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:33.430 13:31:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:33.430 13:31:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:33.430 13:31:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:33.430 13:31:12 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:33.688 13:31:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:33.688 13:31:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:33.688 13:31:12 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:07:33.946 [2024-07-10 13:31:13.081997] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:33.946 [2024-07-10 13:31:13.082020] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829b5ba00 name Existed_Raid, state offline 00:07:33.946 13:31:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:33.946 13:31:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:33.946 13:31:13 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:33.946 13:31:13 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:07:33.946 13:31:13 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:07:33.946 13:31:13 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:07:33.946 13:31:13 -- bdev/bdev_raid.sh@287 -- # killprocess 49929 00:07:33.946 13:31:13 -- common/autotest_common.sh@926 -- # '[' -z 49929 ']' 00:07:33.946 13:31:13 -- common/autotest_common.sh@930 -- # kill -0 49929 00:07:33.946 13:31:13 -- common/autotest_common.sh@931 -- # uname 00:07:33.946 13:31:13 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:33.946 13:31:13 -- common/autotest_common.sh@934 -- # ps -c -o command 49929 00:07:33.946 13:31:13 -- common/autotest_common.sh@934 -- # tail -1 00:07:33.946 13:31:13 -- common/autotest_common.sh@934 -- # 
process_name=bdev_svc 00:07:33.946 13:31:13 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:33.946 killing process with pid 49929 00:07:33.946 13:31:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49929' 00:07:33.946 13:31:13 -- common/autotest_common.sh@945 -- # kill 49929 00:07:34.204 [2024-07-10 13:31:13.307258] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.204 [2024-07-10 13:31:13.307290] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.204 13:31:13 -- common/autotest_common.sh@950 -- # wait 49929 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:07:34.204 00:07:34.204 real 0m8.236s 00:07:34.204 user 0m14.240s 00:07:34.204 sys 0m1.604s 00:07:34.204 13:31:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.204 13:31:13 -- common/autotest_common.sh@10 -- # set +x 00:07:34.204 ************************************ 00:07:34.204 END TEST raid_state_function_test 00:07:34.204 ************************************ 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:07:34.204 13:31:13 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:34.204 13:31:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.204 13:31:13 -- common/autotest_common.sh@10 -- # set +x 00:07:34.204 ************************************ 00:07:34.204 START TEST raid_state_function_test_sb 00:07:34.204 ************************************ 00:07:34.204 13:31:13 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@220 -- # 
superblock_create_arg=-s 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@226 -- # raid_pid=50162 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 50162' 00:07:34.204 Process raid pid: 50162 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:34.204 13:31:13 -- bdev/bdev_raid.sh@228 -- # waitforlisten 50162 /var/tmp/spdk-raid.sock 00:07:34.204 13:31:13 -- common/autotest_common.sh@819 -- # '[' -z 50162 ']' 00:07:34.204 13:31:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:34.204 13:31:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:34.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:34.204 13:31:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:34.204 13:31:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:34.204 13:31:13 -- common/autotest_common.sh@10 -- # set +x 00:07:34.204 [2024-07-10 13:31:13.525701] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:34.204 [2024-07-10 13:31:13.526020] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:34.775 EAL: TSC is not safe to use in SMP mode 00:07:34.775 EAL: TSC is not invariant 00:07:34.775 [2024-07-10 13:31:13.971157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.775 [2024-07-10 13:31:14.065081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.775 [2024-07-10 13:31:14.065578] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.776 [2024-07-10 13:31:14.065589] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.343 13:31:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:35.343 13:31:14 -- common/autotest_common.sh@852 -- # return 0 00:07:35.343 13:31:14 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:35.343 [2024-07-10 13:31:14.668870] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:35.343 [2024-07-10 13:31:14.668923] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:35.343 [2024-07-10 13:31:14.668927] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:35.343 [2024-07-10 13:31:14.668933] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:35.343 [2024-07-10 13:31:14.668936] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:35.343 [2024-07-10 13:31:14.668941] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:35.343 13:31:14 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:35.343 13:31:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:35.343 13:31:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:35.343 13:31:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:35.343 13:31:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:35.343 13:31:14 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:35.343 13:31:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:35.343 13:31:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:35.343 13:31:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:35.343 13:31:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:35.343 13:31:14 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:35.343 13:31:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.602 13:31:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:35.602 "name": "Existed_Raid", 00:07:35.602 "uuid": "ae0fcc86-3ec0-11ef-b9c4-5b09e08d4792", 00:07:35.602 "strip_size_kb": 64, 00:07:35.602 "state": "configuring", 00:07:35.602 "raid_level": "concat", 00:07:35.602 "superblock": true, 00:07:35.602 "num_base_bdevs": 3, 00:07:35.602 "num_base_bdevs_discovered": 0, 00:07:35.602 "num_base_bdevs_operational": 3, 00:07:35.602 "base_bdevs_list": [ 00:07:35.602 { 00:07:35.602 "name": "BaseBdev1", 00:07:35.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.602 "is_configured": false, 00:07:35.602 "data_offset": 0, 00:07:35.602 "data_size": 0 00:07:35.602 }, 00:07:35.602 { 00:07:35.602 "name": "BaseBdev2", 00:07:35.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.602 "is_configured": false, 00:07:35.602 "data_offset": 0, 00:07:35.602 "data_size": 0 00:07:35.602 }, 00:07:35.602 { 00:07:35.602 "name": "BaseBdev3", 00:07:35.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.602 "is_configured": false, 00:07:35.602 "data_offset": 0, 00:07:35.602 "data_size": 0 00:07:35.602 } 00:07:35.602 ] 00:07:35.602 }' 00:07:35.602 13:31:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:35.602 13:31:14 -- common/autotest_common.sh@10 -- # set +x 00:07:35.861 13:31:15 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:36.119 [2024-07-10 13:31:15.353101] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:36.119 [2024-07-10 13:31:15.353121] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ba88500 name Existed_Raid, state configuring 00:07:36.119 13:31:15 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:36.377 [2024-07-10 13:31:15.541162] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.377 [2024-07-10 13:31:15.541200] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.377 [2024-07-10 13:31:15.541203] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.377 [2024-07-10 13:31:15.541209] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.377 [2024-07-10 13:31:15.541211] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:36.377 [2024-07-10 13:31:15.541217] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:36.377 13:31:15 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:36.377 [2024-07-10 13:31:15.729986] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.377 BaseBdev1 00:07:36.634 13:31:15 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:36.634 13:31:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:36.634 13:31:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:36.634 13:31:15 -- common/autotest_common.sh@889 -- # local i 00:07:36.634 13:31:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:36.634 13:31:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:36.634 13:31:15 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:36.634 13:31:15 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:36.892 [ 00:07:36.892 { 00:07:36.892 "name": "BaseBdev1", 00:07:36.892 "aliases": [ 00:07:36.892 "aeb19897-3ec0-11ef-b9c4-5b09e08d4792" 00:07:36.892 ], 00:07:36.892 "product_name": "Malloc disk", 00:07:36.892 "block_size": 512, 00:07:36.892 "num_blocks": 65536, 00:07:36.892 "uuid": "aeb19897-3ec0-11ef-b9c4-5b09e08d4792", 00:07:36.892 "assigned_rate_limits": { 00:07:36.892 "rw_ios_per_sec": 0, 00:07:36.892 "rw_mbytes_per_sec": 0, 00:07:36.892 "r_mbytes_per_sec": 0, 00:07:36.892 "w_mbytes_per_sec": 0 00:07:36.892 }, 00:07:36.892 "claimed": true, 00:07:36.892 "claim_type": "exclusive_write", 00:07:36.892 "zoned": false, 00:07:36.892 "supported_io_types": { 00:07:36.892 "read": true, 00:07:36.892 "write": true, 00:07:36.892 "unmap": true, 00:07:36.892 "write_zeroes": true, 00:07:36.892 "flush": true, 00:07:36.892 "reset": true, 00:07:36.892 "compare": false, 00:07:36.892 "compare_and_write": false, 00:07:36.892 "abort": true, 00:07:36.892 "nvme_admin": false, 00:07:36.892 "nvme_io": false 00:07:36.892 }, 00:07:36.892 "memory_domains": [ 00:07:36.892 { 00:07:36.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.892 "dma_device_type": 2 00:07:36.892 } 00:07:36.892 ], 00:07:36.892 "driver_specific": {} 00:07:36.892 } 00:07:36.892 ] 00:07:36.892 13:31:16 -- common/autotest_common.sh@895 -- # return 0 00:07:36.892 13:31:16 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:36.892 13:31:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:36.892 13:31:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:36.892 13:31:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:36.892 13:31:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:36.892 13:31:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:36.892 13:31:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:36.892 13:31:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:36.892 13:31:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:36.892 13:31:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:36.893 13:31:16 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:36.893 13:31:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.151 13:31:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:37.151 "name": "Existed_Raid", 00:07:37.151 "uuid": "ae94e693-3ec0-11ef-b9c4-5b09e08d4792", 00:07:37.151 "strip_size_kb": 64, 00:07:37.151 "state": "configuring", 00:07:37.151 "raid_level": "concat", 
00:07:37.151 "superblock": true, 00:07:37.151 "num_base_bdevs": 3, 00:07:37.151 "num_base_bdevs_discovered": 1, 00:07:37.151 "num_base_bdevs_operational": 3, 00:07:37.151 "base_bdevs_list": [ 00:07:37.151 { 00:07:37.151 "name": "BaseBdev1", 00:07:37.151 "uuid": "aeb19897-3ec0-11ef-b9c4-5b09e08d4792", 00:07:37.151 "is_configured": true, 00:07:37.151 "data_offset": 2048, 00:07:37.151 "data_size": 63488 00:07:37.151 }, 00:07:37.151 { 00:07:37.151 "name": "BaseBdev2", 00:07:37.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.151 "is_configured": false, 00:07:37.151 "data_offset": 0, 00:07:37.151 "data_size": 0 00:07:37.151 }, 00:07:37.151 { 00:07:37.151 "name": "BaseBdev3", 00:07:37.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.151 "is_configured": false, 00:07:37.151 "data_offset": 0, 00:07:37.151 "data_size": 0 00:07:37.151 } 00:07:37.151 ] 00:07:37.151 }' 00:07:37.151 13:31:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:37.151 13:31:16 -- common/autotest_common.sh@10 -- # set +x 00:07:37.409 13:31:16 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:37.668 [2024-07-10 13:31:16.841551] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:37.668 [2024-07-10 13:31:16.841581] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ba88500 name Existed_Raid, state configuring 00:07:37.668 13:31:16 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:07:37.668 13:31:16 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:37.927 13:31:17 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:37.927 BaseBdev1 00:07:37.927 13:31:17 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:07:37.927 13:31:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:37.927 13:31:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:37.927 13:31:17 -- common/autotest_common.sh@889 -- # local i 00:07:37.927 13:31:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:37.927 13:31:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:37.927 13:31:17 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:38.185 13:31:17 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:38.443 [ 00:07:38.443 { 00:07:38.443 "name": "BaseBdev1", 00:07:38.443 "aliases": [ 00:07:38.443 "af956e83-3ec0-11ef-b9c4-5b09e08d4792" 00:07:38.443 ], 00:07:38.443 "product_name": "Malloc disk", 00:07:38.443 "block_size": 512, 00:07:38.443 "num_blocks": 65536, 00:07:38.443 "uuid": "af956e83-3ec0-11ef-b9c4-5b09e08d4792", 00:07:38.443 "assigned_rate_limits": { 00:07:38.443 "rw_ios_per_sec": 0, 00:07:38.443 "rw_mbytes_per_sec": 0, 00:07:38.443 "r_mbytes_per_sec": 0, 00:07:38.443 "w_mbytes_per_sec": 0 00:07:38.443 }, 00:07:38.443 "claimed": false, 00:07:38.443 "zoned": false, 00:07:38.443 "supported_io_types": { 00:07:38.443 "read": true, 00:07:38.443 "write": true, 00:07:38.444 "unmap": true, 00:07:38.444 "write_zeroes": true, 00:07:38.444 "flush": true, 00:07:38.444 "reset": true, 00:07:38.444 "compare": false, 00:07:38.444 "compare_and_write": false, 00:07:38.444 "abort": 
true, 00:07:38.444 "nvme_admin": false, 00:07:38.444 "nvme_io": false 00:07:38.444 }, 00:07:38.444 "memory_domains": [ 00:07:38.444 { 00:07:38.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.444 "dma_device_type": 2 00:07:38.444 } 00:07:38.444 ], 00:07:38.444 "driver_specific": {} 00:07:38.444 } 00:07:38.444 ] 00:07:38.444 13:31:17 -- common/autotest_common.sh@895 -- # return 0 00:07:38.444 13:31:17 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:38.444 [2024-07-10 13:31:17.782524] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:38.444 [2024-07-10 13:31:17.782933] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.444 [2024-07-10 13:31:17.782971] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.444 [2024-07-10 13:31:17.782979] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:38.444 [2024-07-10 13:31:17.782985] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:38.444 13:31:17 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:38.444 13:31:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:38.444 13:31:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:38.444 13:31:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:38.444 13:31:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:38.444 13:31:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:38.444 13:31:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:38.444 13:31:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:38.444 13:31:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:38.444 13:31:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:38.444 13:31:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:38.444 13:31:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:38.444 13:31:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.444 13:31:17 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:38.703 13:31:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:38.703 "name": "Existed_Raid", 00:07:38.703 "uuid": "afeae7b3-3ec0-11ef-b9c4-5b09e08d4792", 00:07:38.703 "strip_size_kb": 64, 00:07:38.703 "state": "configuring", 00:07:38.703 "raid_level": "concat", 00:07:38.703 "superblock": true, 00:07:38.703 "num_base_bdevs": 3, 00:07:38.703 "num_base_bdevs_discovered": 1, 00:07:38.703 "num_base_bdevs_operational": 3, 00:07:38.703 "base_bdevs_list": [ 00:07:38.703 { 00:07:38.703 "name": "BaseBdev1", 00:07:38.703 "uuid": "af956e83-3ec0-11ef-b9c4-5b09e08d4792", 00:07:38.703 "is_configured": true, 00:07:38.703 "data_offset": 2048, 00:07:38.703 "data_size": 63488 00:07:38.703 }, 00:07:38.703 { 00:07:38.703 "name": "BaseBdev2", 00:07:38.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.703 "is_configured": false, 00:07:38.703 "data_offset": 0, 00:07:38.703 "data_size": 0 00:07:38.703 }, 00:07:38.703 { 00:07:38.703 "name": "BaseBdev3", 00:07:38.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.703 "is_configured": false, 00:07:38.703 "data_offset": 0, 
00:07:38.703 "data_size": 0 00:07:38.703 } 00:07:38.703 ] 00:07:38.703 }' 00:07:38.703 13:31:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:38.703 13:31:18 -- common/autotest_common.sh@10 -- # set +x 00:07:38.962 13:31:18 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:39.221 [2024-07-10 13:31:18.502820] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.221 BaseBdev2 00:07:39.221 13:31:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:39.221 13:31:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:07:39.221 13:31:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:39.221 13:31:18 -- common/autotest_common.sh@889 -- # local i 00:07:39.221 13:31:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:39.221 13:31:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:39.221 13:31:18 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:39.479 13:31:18 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:39.738 [ 00:07:39.738 { 00:07:39.738 "name": "BaseBdev2", 00:07:39.738 "aliases": [ 00:07:39.738 "b058ccad-3ec0-11ef-b9c4-5b09e08d4792" 00:07:39.738 ], 00:07:39.738 "product_name": "Malloc disk", 00:07:39.738 "block_size": 512, 00:07:39.738 "num_blocks": 65536, 00:07:39.738 "uuid": "b058ccad-3ec0-11ef-b9c4-5b09e08d4792", 00:07:39.738 "assigned_rate_limits": { 00:07:39.738 "rw_ios_per_sec": 0, 00:07:39.738 "rw_mbytes_per_sec": 0, 00:07:39.738 "r_mbytes_per_sec": 0, 00:07:39.738 "w_mbytes_per_sec": 0 00:07:39.738 }, 00:07:39.738 "claimed": true, 00:07:39.738 "claim_type": "exclusive_write", 00:07:39.738 "zoned": false, 00:07:39.738 "supported_io_types": { 00:07:39.738 "read": true, 00:07:39.738 "write": true, 00:07:39.738 "unmap": true, 00:07:39.738 "write_zeroes": true, 00:07:39.738 "flush": true, 00:07:39.738 "reset": true, 00:07:39.738 "compare": false, 00:07:39.738 "compare_and_write": false, 00:07:39.738 "abort": true, 00:07:39.738 "nvme_admin": false, 00:07:39.738 "nvme_io": false 00:07:39.738 }, 00:07:39.738 "memory_domains": [ 00:07:39.738 { 00:07:39.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.738 "dma_device_type": 2 00:07:39.738 } 00:07:39.738 ], 00:07:39.738 "driver_specific": {} 00:07:39.738 } 00:07:39.738 ] 00:07:39.738 13:31:18 -- common/autotest_common.sh@895 -- # return 0 00:07:39.738 13:31:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:39.738 13:31:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:39.738 13:31:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:39.738 13:31:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:39.738 13:31:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:39.738 13:31:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:39.738 13:31:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:39.738 13:31:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:39.738 13:31:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:39.738 13:31:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:39.738 13:31:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:39.738 13:31:18 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:07:39.738 13:31:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.738 13:31:18 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:39.996 13:31:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:39.996 "name": "Existed_Raid", 00:07:39.996 "uuid": "afeae7b3-3ec0-11ef-b9c4-5b09e08d4792", 00:07:39.996 "strip_size_kb": 64, 00:07:39.996 "state": "configuring", 00:07:39.996 "raid_level": "concat", 00:07:39.996 "superblock": true, 00:07:39.996 "num_base_bdevs": 3, 00:07:39.996 "num_base_bdevs_discovered": 2, 00:07:39.996 "num_base_bdevs_operational": 3, 00:07:39.996 "base_bdevs_list": [ 00:07:39.996 { 00:07:39.996 "name": "BaseBdev1", 00:07:39.996 "uuid": "af956e83-3ec0-11ef-b9c4-5b09e08d4792", 00:07:39.997 "is_configured": true, 00:07:39.997 "data_offset": 2048, 00:07:39.997 "data_size": 63488 00:07:39.997 }, 00:07:39.997 { 00:07:39.997 "name": "BaseBdev2", 00:07:39.997 "uuid": "b058ccad-3ec0-11ef-b9c4-5b09e08d4792", 00:07:39.997 "is_configured": true, 00:07:39.997 "data_offset": 2048, 00:07:39.997 "data_size": 63488 00:07:39.997 }, 00:07:39.997 { 00:07:39.997 "name": "BaseBdev3", 00:07:39.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.997 "is_configured": false, 00:07:39.997 "data_offset": 0, 00:07:39.997 "data_size": 0 00:07:39.997 } 00:07:39.997 ] 00:07:39.997 }' 00:07:39.997 13:31:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:39.997 13:31:19 -- common/autotest_common.sh@10 -- # set +x 00:07:40.256 13:31:19 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:07:40.256 [2024-07-10 13:31:19.583113] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:40.256 [2024-07-10 13:31:19.583166] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ba88a00 00:07:40.256 [2024-07-10 13:31:19.583171] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:40.256 [2024-07-10 13:31:19.583187] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82baebec0 00:07:40.256 [2024-07-10 13:31:19.583218] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ba88a00 00:07:40.256 [2024-07-10 13:31:19.583221] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82ba88a00 00:07:40.256 [2024-07-10 13:31:19.583235] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.256 BaseBdev3 00:07:40.256 13:31:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:07:40.256 13:31:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:07:40.256 13:31:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:40.256 13:31:19 -- common/autotest_common.sh@889 -- # local i 00:07:40.256 13:31:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:40.256 13:31:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:40.256 13:31:19 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:40.515 13:31:19 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:40.831 [ 00:07:40.831 { 00:07:40.831 "name": "BaseBdev3", 00:07:40.831 "aliases": [ 
00:07:40.831 "b0fda4b0-3ec0-11ef-b9c4-5b09e08d4792" 00:07:40.831 ], 00:07:40.831 "product_name": "Malloc disk", 00:07:40.831 "block_size": 512, 00:07:40.831 "num_blocks": 65536, 00:07:40.831 "uuid": "b0fda4b0-3ec0-11ef-b9c4-5b09e08d4792", 00:07:40.831 "assigned_rate_limits": { 00:07:40.831 "rw_ios_per_sec": 0, 00:07:40.831 "rw_mbytes_per_sec": 0, 00:07:40.831 "r_mbytes_per_sec": 0, 00:07:40.831 "w_mbytes_per_sec": 0 00:07:40.831 }, 00:07:40.831 "claimed": true, 00:07:40.831 "claim_type": "exclusive_write", 00:07:40.831 "zoned": false, 00:07:40.831 "supported_io_types": { 00:07:40.831 "read": true, 00:07:40.831 "write": true, 00:07:40.831 "unmap": true, 00:07:40.831 "write_zeroes": true, 00:07:40.831 "flush": true, 00:07:40.831 "reset": true, 00:07:40.831 "compare": false, 00:07:40.831 "compare_and_write": false, 00:07:40.831 "abort": true, 00:07:40.831 "nvme_admin": false, 00:07:40.831 "nvme_io": false 00:07:40.831 }, 00:07:40.831 "memory_domains": [ 00:07:40.831 { 00:07:40.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.831 "dma_device_type": 2 00:07:40.831 } 00:07:40.831 ], 00:07:40.831 "driver_specific": {} 00:07:40.831 } 00:07:40.831 ] 00:07:40.831 13:31:19 -- common/autotest_common.sh@895 -- # return 0 00:07:40.831 13:31:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:40.831 13:31:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:40.831 13:31:19 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:07:40.831 13:31:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:40.831 13:31:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:40.831 13:31:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:40.831 13:31:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:40.831 13:31:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:40.831 13:31:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:40.831 13:31:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:40.831 13:31:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:40.831 13:31:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:40.831 13:31:19 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:40.831 13:31:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.831 13:31:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:40.831 "name": "Existed_Raid", 00:07:40.831 "uuid": "afeae7b3-3ec0-11ef-b9c4-5b09e08d4792", 00:07:40.831 "strip_size_kb": 64, 00:07:40.831 "state": "online", 00:07:40.831 "raid_level": "concat", 00:07:40.831 "superblock": true, 00:07:40.831 "num_base_bdevs": 3, 00:07:40.831 "num_base_bdevs_discovered": 3, 00:07:40.831 "num_base_bdevs_operational": 3, 00:07:40.831 "base_bdevs_list": [ 00:07:40.831 { 00:07:40.831 "name": "BaseBdev1", 00:07:40.831 "uuid": "af956e83-3ec0-11ef-b9c4-5b09e08d4792", 00:07:40.831 "is_configured": true, 00:07:40.831 "data_offset": 2048, 00:07:40.831 "data_size": 63488 00:07:40.831 }, 00:07:40.831 { 00:07:40.831 "name": "BaseBdev2", 00:07:40.831 "uuid": "b058ccad-3ec0-11ef-b9c4-5b09e08d4792", 00:07:40.831 "is_configured": true, 00:07:40.831 "data_offset": 2048, 00:07:40.831 "data_size": 63488 00:07:40.831 }, 00:07:40.831 { 00:07:40.831 "name": "BaseBdev3", 00:07:40.831 "uuid": "b0fda4b0-3ec0-11ef-b9c4-5b09e08d4792", 00:07:40.831 "is_configured": true, 00:07:40.831 "data_offset": 2048, 00:07:40.831 "data_size": 63488 
00:07:40.831 } 00:07:40.831 ] 00:07:40.831 }' 00:07:40.831 13:31:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:40.831 13:31:20 -- common/autotest_common.sh@10 -- # set +x 00:07:41.089 13:31:20 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:41.347 [2024-07-10 13:31:20.643332] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:41.347 [2024-07-10 13:31:20.643354] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.347 [2024-07-10 13:31:20.643364] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.348 13:31:20 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:07:41.348 13:31:20 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:07:41.348 13:31:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:41.348 13:31:20 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:41.348 13:31:20 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:07:41.348 13:31:20 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:07:41.348 13:31:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:41.348 13:31:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:07:41.348 13:31:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:41.348 13:31:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:41.348 13:31:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:41.348 13:31:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:41.348 13:31:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:41.348 13:31:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:41.348 13:31:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:41.348 13:31:20 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:41.348 13:31:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.606 13:31:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:41.606 "name": "Existed_Raid", 00:07:41.606 "uuid": "afeae7b3-3ec0-11ef-b9c4-5b09e08d4792", 00:07:41.606 "strip_size_kb": 64, 00:07:41.606 "state": "offline", 00:07:41.606 "raid_level": "concat", 00:07:41.606 "superblock": true, 00:07:41.606 "num_base_bdevs": 3, 00:07:41.606 "num_base_bdevs_discovered": 2, 00:07:41.606 "num_base_bdevs_operational": 2, 00:07:41.606 "base_bdevs_list": [ 00:07:41.606 { 00:07:41.606 "name": null, 00:07:41.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.606 "is_configured": false, 00:07:41.606 "data_offset": 2048, 00:07:41.606 "data_size": 63488 00:07:41.606 }, 00:07:41.606 { 00:07:41.606 "name": "BaseBdev2", 00:07:41.606 "uuid": "b058ccad-3ec0-11ef-b9c4-5b09e08d4792", 00:07:41.606 "is_configured": true, 00:07:41.606 "data_offset": 2048, 00:07:41.606 "data_size": 63488 00:07:41.606 }, 00:07:41.606 { 00:07:41.606 "name": "BaseBdev3", 00:07:41.606 "uuid": "b0fda4b0-3ec0-11ef-b9c4-5b09e08d4792", 00:07:41.606 "is_configured": true, 00:07:41.606 "data_offset": 2048, 00:07:41.606 "data_size": 63488 00:07:41.606 } 00:07:41.606 ] 00:07:41.606 }' 00:07:41.606 13:31:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:41.606 13:31:20 -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 13:31:21 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:07:41.865 13:31:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:41.865 13:31:21 
-- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:41.865 13:31:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:42.124 13:31:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:42.124 13:31:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:42.124 13:31:21 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:42.382 [2024-07-10 13:31:21.632331] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:42.382 13:31:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:42.382 13:31:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:42.382 13:31:21 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:42.382 13:31:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:42.641 13:31:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:42.641 13:31:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:42.641 13:31:21 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:07:42.641 [2024-07-10 13:31:21.981156] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:42.641 [2024-07-10 13:31:21.981182] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ba88a00 name Existed_Raid, state offline 00:07:42.641 13:31:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:42.641 13:31:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:42.641 13:31:22 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:07:42.641 13:31:22 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:42.899 13:31:22 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:07:42.899 13:31:22 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:07:42.899 13:31:22 -- bdev/bdev_raid.sh@287 -- # killprocess 50162 00:07:42.899 13:31:22 -- common/autotest_common.sh@926 -- # '[' -z 50162 ']' 00:07:42.899 13:31:22 -- common/autotest_common.sh@930 -- # kill -0 50162 00:07:42.899 13:31:22 -- common/autotest_common.sh@931 -- # uname 00:07:42.899 13:31:22 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:42.899 13:31:22 -- common/autotest_common.sh@934 -- # tail -1 00:07:42.899 13:31:22 -- common/autotest_common.sh@934 -- # ps -c -o command 50162 00:07:42.899 13:31:22 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:42.899 13:31:22 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:42.899 killing process with pid 50162 00:07:42.899 13:31:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50162' 00:07:42.899 13:31:22 -- common/autotest_common.sh@945 -- # kill 50162 00:07:42.899 [2024-07-10 13:31:22.204855] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:42.899 [2024-07-10 13:31:22.204889] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.899 13:31:22 -- common/autotest_common.sh@950 -- # wait 50162 00:07:43.158 13:31:22 -- bdev/bdev_raid.sh@289 -- # return 0 00:07:43.158 00:07:43.158 real 0m8.849s 00:07:43.158 user 0m15.432s 00:07:43.158 sys 0m1.547s 00:07:43.158 13:31:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.158 13:31:22 -- common/autotest_common.sh@10 -- # set 
+x 00:07:43.158 ************************************ 00:07:43.158 END TEST raid_state_function_test_sb 00:07:43.158 ************************************ 00:07:43.158 13:31:22 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:07:43.158 13:31:22 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:43.158 13:31:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.158 13:31:22 -- common/autotest_common.sh@10 -- # set +x 00:07:43.158 ************************************ 00:07:43.158 START TEST raid_superblock_test 00:07:43.158 ************************************ 00:07:43.158 13:31:22 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:07:43.158 13:31:22 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:07:43.158 13:31:22 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:07:43.158 13:31:22 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:07:43.158 13:31:22 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:07:43.159 13:31:22 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:07:43.159 13:31:22 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:07:43.159 13:31:22 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:07:43.159 13:31:22 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:07:43.159 13:31:22 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:07:43.159 13:31:22 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:07:43.159 13:31:22 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:07:43.159 13:31:22 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:07:43.159 13:31:22 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:07:43.159 13:31:22 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:07:43.159 13:31:22 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:07:43.159 13:31:22 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:07:43.159 13:31:22 -- bdev/bdev_raid.sh@357 -- # raid_pid=50398 00:07:43.159 13:31:22 -- bdev/bdev_raid.sh@358 -- # waitforlisten 50398 /var/tmp/spdk-raid.sock 00:07:43.159 13:31:22 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:43.159 13:31:22 -- common/autotest_common.sh@819 -- # '[' -z 50398 ']' 00:07:43.159 13:31:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:43.159 13:31:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:43.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:43.159 13:31:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:43.159 13:31:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:43.159 13:31:22 -- common/autotest_common.sh@10 -- # set +x 00:07:43.159 [2024-07-10 13:31:22.420782] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
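
Note on the harness pattern traced here: raid_superblock_test does not talk to a long-running SPDK target; it spawns a private bdev_svc app on its own RPC socket, waits for it to listen, then drives every step through rpc.py. A minimal sketch of that flow, using only the paths and RPC calls that appear in this log (the real bdev_raid.sh/autotest_common.sh helpers add option parsing, logging and error handling on top):

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Spawn a bare bdev_svc app that only serves RPCs, with raid debug logging enabled.
    /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -L bdev_raid &
    raid_pid=$!

    # waitforlisten equivalent: poll until the socket answers an RPC (assumption: any
    # cheap call such as rpc_get_methods is enough for this).
    until "$rpc" -s "$sock" rpc_get_methods > /dev/null 2>&1; do sleep 0.2; done

    # Build the legs exactly as traced above: one malloc bdev wrapped in a passthru
    # bdev per leg, then a 64 KiB-strip concat raid with a superblock (-s) on top.
    for i in 1 2 3; do
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
        "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s

    # Teardown mirrors the killprocess step in the trace: stop the app and wait for it.
    kill "$raid_pid" && wait "$raid_pid"
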
00:07:43.159 [2024-07-10 13:31:22.421147] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:43.727 EAL: TSC is not safe to use in SMP mode 00:07:43.727 EAL: TSC is not invariant 00:07:43.727 [2024-07-10 13:31:22.872731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.727 [2024-07-10 13:31:22.961844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.727 [2024-07-10 13:31:22.962328] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.727 [2024-07-10 13:31:22.962339] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.985 13:31:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:43.985 13:31:23 -- common/autotest_common.sh@852 -- # return 0 00:07:43.985 13:31:23 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:07:43.985 13:31:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:43.985 13:31:23 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:07:43.985 13:31:23 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:07:43.985 13:31:23 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:43.985 13:31:23 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:43.985 13:31:23 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:43.985 13:31:23 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:43.985 13:31:23 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:44.242 malloc1 00:07:44.242 13:31:23 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:44.500 [2024-07-10 13:31:23.693526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:44.500 [2024-07-10 13:31:23.693580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.500 [2024-07-10 13:31:23.694130] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d1c7780 00:07:44.500 [2024-07-10 13:31:23.694156] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.500 [2024-07-10 13:31:23.694860] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.500 [2024-07-10 13:31:23.694888] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:44.500 pt1 00:07:44.500 13:31:23 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:44.500 13:31:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:44.500 13:31:23 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:07:44.500 13:31:23 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:07:44.500 13:31:23 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:44.500 13:31:23 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:44.500 13:31:23 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:44.500 13:31:23 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:44.500 13:31:23 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:44.760 malloc2 00:07:44.760 13:31:23 -- bdev/bdev_raid.sh@371 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:44.760 [2024-07-10 13:31:24.081624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:44.760 [2024-07-10 13:31:24.081676] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.760 [2024-07-10 13:31:24.081715] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d1c7c80 00:07:44.760 [2024-07-10 13:31:24.081723] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.760 [2024-07-10 13:31:24.082159] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.760 [2024-07-10 13:31:24.082187] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:44.760 pt2 00:07:44.760 13:31:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:44.760 13:31:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:44.760 13:31:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:07:44.760 13:31:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:07:44.760 13:31:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:07:44.760 13:31:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:44.760 13:31:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:44.760 13:31:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:44.760 13:31:24 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:07:45.020 malloc3 00:07:45.020 13:31:24 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:45.280 [2024-07-10 13:31:24.541735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:45.280 [2024-07-10 13:31:24.541787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.280 [2024-07-10 13:31:24.541810] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d1c8180 00:07:45.280 [2024-07-10 13:31:24.541816] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.280 [2024-07-10 13:31:24.542279] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.280 [2024-07-10 13:31:24.542307] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:45.280 pt3 00:07:45.280 13:31:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:45.280 13:31:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:45.280 13:31:24 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:07:45.540 [2024-07-10 13:31:24.733789] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:45.540 [2024-07-10 13:31:24.734189] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:45.540 [2024-07-10 13:31:24.734208] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:45.540 [2024-07-10 13:31:24.734259] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d1c8400 00:07:45.540 [2024-07-10 13:31:24.734264] 
bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:45.540 [2024-07-10 13:31:24.734292] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d22ae20 00:07:45.540 [2024-07-10 13:31:24.734344] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d1c8400 00:07:45.540 [2024-07-10 13:31:24.734352] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d1c8400 00:07:45.540 [2024-07-10 13:31:24.734371] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.540 13:31:24 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:07:45.540 13:31:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:45.540 13:31:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:45.540 13:31:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:45.540 13:31:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:45.540 13:31:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:45.540 13:31:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:45.540 13:31:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:45.540 13:31:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:45.540 13:31:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:45.540 13:31:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.540 13:31:24 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:45.799 13:31:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:45.799 "name": "raid_bdev1", 00:07:45.799 "uuid": "b40f9576-3ec0-11ef-b9c4-5b09e08d4792", 00:07:45.799 "strip_size_kb": 64, 00:07:45.799 "state": "online", 00:07:45.799 "raid_level": "concat", 00:07:45.799 "superblock": true, 00:07:45.799 "num_base_bdevs": 3, 00:07:45.799 "num_base_bdevs_discovered": 3, 00:07:45.799 "num_base_bdevs_operational": 3, 00:07:45.799 "base_bdevs_list": [ 00:07:45.799 { 00:07:45.799 "name": "pt1", 00:07:45.799 "uuid": "83bb9373-f385-e65a-80a0-15e5c48aa2e6", 00:07:45.799 "is_configured": true, 00:07:45.799 "data_offset": 2048, 00:07:45.799 "data_size": 63488 00:07:45.799 }, 00:07:45.799 { 00:07:45.799 "name": "pt2", 00:07:45.799 "uuid": "061d68e5-02ed-b254-a990-58c8ef43db45", 00:07:45.799 "is_configured": true, 00:07:45.799 "data_offset": 2048, 00:07:45.799 "data_size": 63488 00:07:45.799 }, 00:07:45.799 { 00:07:45.799 "name": "pt3", 00:07:45.799 "uuid": "d1921027-1208-6e52-8f38-5d872f0502cd", 00:07:45.799 "is_configured": true, 00:07:45.799 "data_offset": 2048, 00:07:45.799 "data_size": 63488 00:07:45.799 } 00:07:45.799 ] 00:07:45.799 }' 00:07:45.799 13:31:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:45.799 13:31:24 -- common/autotest_common.sh@10 -- # set +x 00:07:46.058 13:31:25 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:46.058 13:31:25 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:07:46.316 [2024-07-10 13:31:25.429965] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.316 13:31:25 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=b40f9576-3ec0-11ef-b9c4-5b09e08d4792 00:07:46.316 13:31:25 -- bdev/bdev_raid.sh@380 -- # '[' -z b40f9576-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:07:46.316 13:31:25 -- bdev/bdev_raid.sh@385 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:46.316 [2024-07-10 13:31:25.633978] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:46.316 [2024-07-10 13:31:25.633998] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.316 [2024-07-10 13:31:25.634010] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.316 [2024-07-10 13:31:25.634035] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.316 [2024-07-10 13:31:25.634039] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d1c8400 name raid_bdev1, state offline 00:07:46.316 13:31:25 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:46.316 13:31:25 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:07:46.575 13:31:25 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:07:46.575 13:31:25 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:07:46.575 13:31:25 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.575 13:31:25 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:46.834 13:31:26 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.834 13:31:26 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:47.112 13:31:26 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:47.112 13:31:26 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:07:47.372 13:31:26 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:47.372 13:31:26 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:47.372 13:31:26 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:07:47.372 13:31:26 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:07:47.372 13:31:26 -- common/autotest_common.sh@640 -- # local es=0 00:07:47.372 13:31:26 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:07:47.372 13:31:26 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:47.372 13:31:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:47.372 13:31:26 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:47.372 13:31:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:47.372 13:31:26 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:47.372 13:31:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:47.372 13:31:26 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:47.372 13:31:26 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:47.372 13:31:26 -- common/autotest_common.sh@643 -- 
# /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:07:47.630 [2024-07-10 13:31:26.874292] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:47.630 [2024-07-10 13:31:26.874724] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:47.630 [2024-07-10 13:31:26.874741] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:07:47.630 [2024-07-10 13:31:26.874752] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:07:47.630 [2024-07-10 13:31:26.874785] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:07:47.630 [2024-07-10 13:31:26.874793] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:07:47.630 [2024-07-10 13:31:26.874800] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:47.630 [2024-07-10 13:31:26.874804] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d1c8180 name raid_bdev1, state configuring 00:07:47.630 request: 00:07:47.630 { 00:07:47.630 "name": "raid_bdev1", 00:07:47.630 "raid_level": "concat", 00:07:47.630 "base_bdevs": [ 00:07:47.630 "malloc1", 00:07:47.630 "malloc2", 00:07:47.630 "malloc3" 00:07:47.630 ], 00:07:47.630 "superblock": false, 00:07:47.630 "strip_size_kb": 64, 00:07:47.630 "method": "bdev_raid_create", 00:07:47.630 "req_id": 1 00:07:47.630 } 00:07:47.630 Got JSON-RPC error response 00:07:47.630 response: 00:07:47.630 { 00:07:47.630 "code": -17, 00:07:47.630 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:47.630 } 00:07:47.630 13:31:26 -- common/autotest_common.sh@643 -- # es=1 00:07:47.630 13:31:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:47.630 13:31:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:47.630 13:31:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:47.630 13:31:26 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:47.630 13:31:26 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:07:47.888 13:31:27 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:07:47.888 13:31:27 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:07:47.888 13:31:27 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:48.146 [2024-07-10 13:31:27.294387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:48.146 [2024-07-10 13:31:27.294435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.146 [2024-07-10 13:31:27.294459] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d1c7c80 00:07:48.146 [2024-07-10 13:31:27.294466] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.146 [2024-07-10 13:31:27.294971] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.146 [2024-07-10 13:31:27.294997] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:48.146 [2024-07-10 13:31:27.295015] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:07:48.146 [2024-07-10 
13:31:27.295023] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:48.146 pt1 00:07:48.146 13:31:27 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:07:48.146 13:31:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:48.146 13:31:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:48.146 13:31:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:48.146 13:31:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:48.146 13:31:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:48.146 13:31:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:48.146 13:31:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:48.146 13:31:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:48.146 13:31:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:48.146 13:31:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.146 13:31:27 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:48.405 13:31:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:48.405 "name": "raid_bdev1", 00:07:48.405 "uuid": "b40f9576-3ec0-11ef-b9c4-5b09e08d4792", 00:07:48.405 "strip_size_kb": 64, 00:07:48.405 "state": "configuring", 00:07:48.405 "raid_level": "concat", 00:07:48.405 "superblock": true, 00:07:48.405 "num_base_bdevs": 3, 00:07:48.405 "num_base_bdevs_discovered": 1, 00:07:48.405 "num_base_bdevs_operational": 3, 00:07:48.405 "base_bdevs_list": [ 00:07:48.405 { 00:07:48.405 "name": "pt1", 00:07:48.405 "uuid": "83bb9373-f385-e65a-80a0-15e5c48aa2e6", 00:07:48.405 "is_configured": true, 00:07:48.405 "data_offset": 2048, 00:07:48.405 "data_size": 63488 00:07:48.405 }, 00:07:48.405 { 00:07:48.405 "name": null, 00:07:48.405 "uuid": "061d68e5-02ed-b254-a990-58c8ef43db45", 00:07:48.405 "is_configured": false, 00:07:48.405 "data_offset": 2048, 00:07:48.405 "data_size": 63488 00:07:48.405 }, 00:07:48.405 { 00:07:48.405 "name": null, 00:07:48.405 "uuid": "d1921027-1208-6e52-8f38-5d872f0502cd", 00:07:48.405 "is_configured": false, 00:07:48.405 "data_offset": 2048, 00:07:48.405 "data_size": 63488 00:07:48.405 } 00:07:48.405 ] 00:07:48.405 }' 00:07:48.405 13:31:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:48.405 13:31:27 -- common/autotest_common.sh@10 -- # set +x 00:07:48.663 13:31:27 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:07:48.663 13:31:27 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:48.921 [2024-07-10 13:31:28.126581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:48.921 [2024-07-10 13:31:28.126635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.921 [2024-07-10 13:31:28.126661] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d1c8680 00:07:48.921 [2024-07-10 13:31:28.126667] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.921 [2024-07-10 13:31:28.126765] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.921 [2024-07-10 13:31:28.126774] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:48.921 [2024-07-10 13:31:28.126791] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev pt2 00:07:48.921 [2024-07-10 13:31:28.126798] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:48.921 pt2 00:07:48.921 13:31:28 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:49.179 [2024-07-10 13:31:28.338632] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:07:49.179 13:31:28 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:07:49.179 13:31:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:49.179 13:31:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:49.179 13:31:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:49.179 13:31:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:49.179 13:31:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:49.179 13:31:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:49.179 13:31:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:49.179 13:31:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:49.179 13:31:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:49.179 13:31:28 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:49.179 13:31:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.437 13:31:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:49.437 "name": "raid_bdev1", 00:07:49.437 "uuid": "b40f9576-3ec0-11ef-b9c4-5b09e08d4792", 00:07:49.437 "strip_size_kb": 64, 00:07:49.437 "state": "configuring", 00:07:49.437 "raid_level": "concat", 00:07:49.437 "superblock": true, 00:07:49.437 "num_base_bdevs": 3, 00:07:49.437 "num_base_bdevs_discovered": 1, 00:07:49.437 "num_base_bdevs_operational": 3, 00:07:49.437 "base_bdevs_list": [ 00:07:49.437 { 00:07:49.437 "name": "pt1", 00:07:49.437 "uuid": "83bb9373-f385-e65a-80a0-15e5c48aa2e6", 00:07:49.437 "is_configured": true, 00:07:49.437 "data_offset": 2048, 00:07:49.437 "data_size": 63488 00:07:49.437 }, 00:07:49.437 { 00:07:49.437 "name": null, 00:07:49.437 "uuid": "061d68e5-02ed-b254-a990-58c8ef43db45", 00:07:49.437 "is_configured": false, 00:07:49.437 "data_offset": 2048, 00:07:49.437 "data_size": 63488 00:07:49.437 }, 00:07:49.437 { 00:07:49.437 "name": null, 00:07:49.437 "uuid": "d1921027-1208-6e52-8f38-5d872f0502cd", 00:07:49.437 "is_configured": false, 00:07:49.437 "data_offset": 2048, 00:07:49.437 "data_size": 63488 00:07:49.437 } 00:07:49.437 ] 00:07:49.437 }' 00:07:49.437 13:31:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:49.437 13:31:28 -- common/autotest_common.sh@10 -- # set +x 00:07:49.693 13:31:28 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:07:49.693 13:31:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:49.693 13:31:28 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:49.951 [2024-07-10 13:31:29.134816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:49.951 [2024-07-10 13:31:29.134860] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.951 [2024-07-10 13:31:29.134884] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d1c8680 00:07:49.951 [2024-07-10 13:31:29.134890] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.951 [2024-07-10 13:31:29.134972] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.951 [2024-07-10 13:31:29.134980] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:49.951 [2024-07-10 13:31:29.134996] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:07:49.951 [2024-07-10 13:31:29.135002] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:49.951 pt2 00:07:49.951 13:31:29 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:07:49.951 13:31:29 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:49.951 13:31:29 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:50.210 [2024-07-10 13:31:29.338859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:50.210 [2024-07-10 13:31:29.338895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.210 [2024-07-10 13:31:29.338909] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d1c8400 00:07:50.210 [2024-07-10 13:31:29.338916] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.210 [2024-07-10 13:31:29.338975] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.210 [2024-07-10 13:31:29.338983] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:50.210 [2024-07-10 13:31:29.338997] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:07:50.210 [2024-07-10 13:31:29.339002] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:50.210 [2024-07-10 13:31:29.339020] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d1c7780 00:07:50.210 [2024-07-10 13:31:29.339024] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:50.210 [2024-07-10 13:31:29.339042] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d22ae20 00:07:50.210 [2024-07-10 13:31:29.339079] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d1c7780 00:07:50.210 [2024-07-10 13:31:29.339082] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d1c7780 00:07:50.210 [2024-07-10 13:31:29.339098] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.210 pt3 00:07:50.210 13:31:29 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:07:50.210 13:31:29 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:50.210 13:31:29 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:07:50.210 13:31:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:50.210 13:31:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:50.210 13:31:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:50.210 13:31:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:50.210 13:31:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:50.210 13:31:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:50.210 13:31:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:50.210 13:31:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:50.210 
13:31:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:50.210 13:31:29 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:50.210 13:31:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.210 13:31:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:50.210 "name": "raid_bdev1", 00:07:50.210 "uuid": "b40f9576-3ec0-11ef-b9c4-5b09e08d4792", 00:07:50.210 "strip_size_kb": 64, 00:07:50.210 "state": "online", 00:07:50.210 "raid_level": "concat", 00:07:50.210 "superblock": true, 00:07:50.210 "num_base_bdevs": 3, 00:07:50.210 "num_base_bdevs_discovered": 3, 00:07:50.210 "num_base_bdevs_operational": 3, 00:07:50.210 "base_bdevs_list": [ 00:07:50.210 { 00:07:50.210 "name": "pt1", 00:07:50.210 "uuid": "83bb9373-f385-e65a-80a0-15e5c48aa2e6", 00:07:50.210 "is_configured": true, 00:07:50.210 "data_offset": 2048, 00:07:50.210 "data_size": 63488 00:07:50.210 }, 00:07:50.210 { 00:07:50.210 "name": "pt2", 00:07:50.210 "uuid": "061d68e5-02ed-b254-a990-58c8ef43db45", 00:07:50.210 "is_configured": true, 00:07:50.210 "data_offset": 2048, 00:07:50.210 "data_size": 63488 00:07:50.210 }, 00:07:50.210 { 00:07:50.210 "name": "pt3", 00:07:50.210 "uuid": "d1921027-1208-6e52-8f38-5d872f0502cd", 00:07:50.210 "is_configured": true, 00:07:50.210 "data_offset": 2048, 00:07:50.210 "data_size": 63488 00:07:50.210 } 00:07:50.210 ] 00:07:50.210 }' 00:07:50.210 13:31:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:50.210 13:31:29 -- common/autotest_common.sh@10 -- # set +x 00:07:50.777 13:31:29 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:50.777 13:31:29 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:07:50.777 [2024-07-10 13:31:30.055097] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.777 13:31:30 -- bdev/bdev_raid.sh@430 -- # '[' b40f9576-3ec0-11ef-b9c4-5b09e08d4792 '!=' b40f9576-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:07:50.777 13:31:30 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:07:50.777 13:31:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:50.777 13:31:30 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:50.777 13:31:30 -- bdev/bdev_raid.sh@511 -- # killprocess 50398 00:07:50.777 13:31:30 -- common/autotest_common.sh@926 -- # '[' -z 50398 ']' 00:07:50.777 13:31:30 -- common/autotest_common.sh@930 -- # kill -0 50398 00:07:50.777 13:31:30 -- common/autotest_common.sh@931 -- # uname 00:07:50.777 13:31:30 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:50.777 13:31:30 -- common/autotest_common.sh@934 -- # ps -c -o command 50398 00:07:50.777 13:31:30 -- common/autotest_common.sh@934 -- # tail -1 00:07:50.777 13:31:30 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:50.777 13:31:30 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:50.777 killing process with pid 50398 00:07:50.777 13:31:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50398' 00:07:50.777 13:31:30 -- common/autotest_common.sh@945 -- # kill 50398 00:07:50.777 [2024-07-10 13:31:30.089579] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:50.777 [2024-07-10 13:31:30.089621] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.777 [2024-07-10 13:31:30.089637] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:07:50.777 [2024-07-10 13:31:30.089642] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d1c7780 name raid_bdev1, state offline 00:07:50.777 13:31:30 -- common/autotest_common.sh@950 -- # wait 50398 00:07:50.777 [2024-07-10 13:31:30.117206] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:51.036 13:31:30 -- bdev/bdev_raid.sh@513 -- # return 0 00:07:51.036 00:07:51.036 real 0m7.931s 00:07:51.036 user 0m13.669s 00:07:51.036 sys 0m1.423s 00:07:51.036 13:31:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.036 13:31:30 -- common/autotest_common.sh@10 -- # set +x 00:07:51.036 ************************************ 00:07:51.036 END TEST raid_superblock_test 00:07:51.036 ************************************ 00:07:51.036 13:31:30 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:07:51.036 13:31:30 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:07:51.036 13:31:30 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:51.036 13:31:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:51.036 13:31:30 -- common/autotest_common.sh@10 -- # set +x 00:07:51.036 ************************************ 00:07:51.036 START TEST raid_state_function_test 00:07:51.036 ************************************ 00:07:51.036 13:31:30 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:07:51.036 13:31:30 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:07:51.036 13:31:30 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:07:51.036 13:31:30 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:07:51.036 13:31:30 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:51.036 13:31:30 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@226 -- # raid_pid=50579 00:07:51.037 Process raid pid: 50579 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 50579' 00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@228 -- # waitforlisten 50579 /var/tmp/spdk-raid.sock 
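
The concat run above ends with raid_bdev1 going offline as soon as its passthru members are deleted; that is driven by the has_redundancy check traced earlier (case $1 in ... return 1 for concat), since concat stripes data without any mirror or parity and cannot survive a missing member. The raid_state_function_test starting below switches the level to raid1, the mirrored case. A plausible shape for that helper, inferred from the traced behaviour rather than copied from bdev_raid.sh:

    # Inferred sketch: the trace only proves that the concat branch returns 1; which
    # other levels the real helper treats as redundant is an assumption.
    has_redundancy() {
        case $1 in
            raid1) return 0 ;;   # mirrored, can lose a base bdev without going offline
            *) return 1 ;;       # raid0/concat: a missing base bdev takes the array offline
        esac
    }
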
00:07:51.037 13:31:30 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:51.037 13:31:30 -- common/autotest_common.sh@819 -- # '[' -z 50579 ']' 00:07:51.037 13:31:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:51.037 13:31:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:51.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:51.037 13:31:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:51.037 13:31:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:51.037 13:31:30 -- common/autotest_common.sh@10 -- # set +x 00:07:51.295 [2024-07-10 13:31:30.409586] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:51.295 [2024-07-10 13:31:30.409949] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:51.555 EAL: TSC is not safe to use in SMP mode 00:07:51.555 EAL: TSC is not invariant 00:07:51.555 [2024-07-10 13:31:30.863375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.815 [2024-07-10 13:31:30.983152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.815 [2024-07-10 13:31:30.983639] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.815 [2024-07-10 13:31:30.983648] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.074 13:31:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:52.074 13:31:31 -- common/autotest_common.sh@852 -- # return 0 00:07:52.074 13:31:31 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:52.333 [2024-07-10 13:31:31.547248] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:52.333 [2024-07-10 13:31:31.547316] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:52.333 [2024-07-10 13:31:31.547321] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:52.333 [2024-07-10 13:31:31.547329] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:52.333 [2024-07-10 13:31:31.547332] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:52.333 [2024-07-10 13:31:31.547339] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:52.333 13:31:31 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:52.333 13:31:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:52.333 13:31:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:52.333 13:31:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:52.333 13:31:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:52.333 13:31:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:52.333 13:31:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:52.333 13:31:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:52.333 13:31:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:52.333 13:31:31 -- 
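
Most of the assertions in this log come from one helper, verify_raid_bdev_state: it pulls the whole raid bdev list over RPC, keeps the named array with the jq expression shown in the trace, and compares the fields (state, raid_level, strip_size_kb, base bdev counts) against the expected values. A self-contained sketch of that check follows; the RPC call and jq filter are taken verbatim from this log, while the comparison and error reporting in the real bdev_raid.sh are more elaborate:

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    verify_raid_state() {
        local name=$1 expected_state=$2 level=$3 strip=$4 operational=$5
        local info
        # Fetch every raid bdev and keep only the one under test.
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r '.state' <<< "$info") == "$expected_state" ]] &&
        [[ $(jq -r '.raid_level' <<< "$info") == "$level" ]] &&
        [[ $(jq -r '.strip_size_kb' <<< "$info") == "$strip" ]] &&
        [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == "$operational" ]]
    }

    # The checks this section keeps repeating, e.g.:
    #   verify_raid_state Existed_Raid configuring concat 64 3
    #   verify_raid_state Existed_Raid online concat 64 3
    #   verify_raid_state raid_bdev1 online concat 64 3
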
bdev/bdev_raid.sh@125 -- # local tmp 00:07:52.333 13:31:31 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:52.333 13:31:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.593 13:31:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:52.593 "name": "Existed_Raid", 00:07:52.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.593 "strip_size_kb": 0, 00:07:52.593 "state": "configuring", 00:07:52.593 "raid_level": "raid1", 00:07:52.593 "superblock": false, 00:07:52.593 "num_base_bdevs": 3, 00:07:52.593 "num_base_bdevs_discovered": 0, 00:07:52.593 "num_base_bdevs_operational": 3, 00:07:52.593 "base_bdevs_list": [ 00:07:52.593 { 00:07:52.593 "name": "BaseBdev1", 00:07:52.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.593 "is_configured": false, 00:07:52.593 "data_offset": 0, 00:07:52.593 "data_size": 0 00:07:52.593 }, 00:07:52.593 { 00:07:52.593 "name": "BaseBdev2", 00:07:52.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.593 "is_configured": false, 00:07:52.593 "data_offset": 0, 00:07:52.593 "data_size": 0 00:07:52.593 }, 00:07:52.593 { 00:07:52.593 "name": "BaseBdev3", 00:07:52.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.593 "is_configured": false, 00:07:52.593 "data_offset": 0, 00:07:52.593 "data_size": 0 00:07:52.593 } 00:07:52.593 ] 00:07:52.593 }' 00:07:52.593 13:31:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:52.593 13:31:31 -- common/autotest_common.sh@10 -- # set +x 00:07:52.852 13:31:32 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:53.113 [2024-07-10 13:31:32.287374] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:53.113 [2024-07-10 13:31:32.287398] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c0cc500 name Existed_Raid, state configuring 00:07:53.113 13:31:32 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:53.373 [2024-07-10 13:31:32.495422] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:53.373 [2024-07-10 13:31:32.495456] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:53.373 [2024-07-10 13:31:32.495460] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:53.373 [2024-07-10 13:31:32.495467] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:53.373 [2024-07-10 13:31:32.495469] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:53.373 [2024-07-10 13:31:32.495475] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:53.373 13:31:32 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:53.373 [2024-07-10 13:31:32.696683] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:53.373 BaseBdev1 00:07:53.373 13:31:32 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:53.373 13:31:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:53.373 13:31:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 
00:07:53.373 13:31:32 -- common/autotest_common.sh@889 -- # local i 00:07:53.373 13:31:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:53.373 13:31:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:53.373 13:31:32 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:53.633 13:31:32 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:53.891 [ 00:07:53.891 { 00:07:53.891 "name": "BaseBdev1", 00:07:53.891 "aliases": [ 00:07:53.891 "b8ce7218-3ec0-11ef-b9c4-5b09e08d4792" 00:07:53.891 ], 00:07:53.891 "product_name": "Malloc disk", 00:07:53.891 "block_size": 512, 00:07:53.891 "num_blocks": 65536, 00:07:53.891 "uuid": "b8ce7218-3ec0-11ef-b9c4-5b09e08d4792", 00:07:53.891 "assigned_rate_limits": { 00:07:53.891 "rw_ios_per_sec": 0, 00:07:53.891 "rw_mbytes_per_sec": 0, 00:07:53.891 "r_mbytes_per_sec": 0, 00:07:53.891 "w_mbytes_per_sec": 0 00:07:53.891 }, 00:07:53.891 "claimed": true, 00:07:53.891 "claim_type": "exclusive_write", 00:07:53.891 "zoned": false, 00:07:53.891 "supported_io_types": { 00:07:53.891 "read": true, 00:07:53.891 "write": true, 00:07:53.891 "unmap": true, 00:07:53.891 "write_zeroes": true, 00:07:53.891 "flush": true, 00:07:53.891 "reset": true, 00:07:53.891 "compare": false, 00:07:53.891 "compare_and_write": false, 00:07:53.891 "abort": true, 00:07:53.891 "nvme_admin": false, 00:07:53.891 "nvme_io": false 00:07:53.891 }, 00:07:53.891 "memory_domains": [ 00:07:53.891 { 00:07:53.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.891 "dma_device_type": 2 00:07:53.891 } 00:07:53.891 ], 00:07:53.891 "driver_specific": {} 00:07:53.891 } 00:07:53.891 ] 00:07:53.892 13:31:33 -- common/autotest_common.sh@895 -- # return 0 00:07:53.892 13:31:33 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:53.892 13:31:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:53.892 13:31:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:53.892 13:31:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:53.892 13:31:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:53.892 13:31:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:53.892 13:31:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:53.892 13:31:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:53.892 13:31:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:53.892 13:31:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:53.892 13:31:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.892 13:31:33 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:54.151 13:31:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:54.151 "name": "Existed_Raid", 00:07:54.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.151 "strip_size_kb": 0, 00:07:54.151 "state": "configuring", 00:07:54.151 "raid_level": "raid1", 00:07:54.151 "superblock": false, 00:07:54.151 "num_base_bdevs": 3, 00:07:54.151 "num_base_bdevs_discovered": 1, 00:07:54.151 "num_base_bdevs_operational": 3, 00:07:54.151 "base_bdevs_list": [ 00:07:54.151 { 00:07:54.151 "name": "BaseBdev1", 00:07:54.151 "uuid": "b8ce7218-3ec0-11ef-b9c4-5b09e08d4792", 00:07:54.151 "is_configured": true, 00:07:54.151 
"data_offset": 0, 00:07:54.151 "data_size": 65536 00:07:54.151 }, 00:07:54.151 { 00:07:54.151 "name": "BaseBdev2", 00:07:54.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.151 "is_configured": false, 00:07:54.151 "data_offset": 0, 00:07:54.151 "data_size": 0 00:07:54.151 }, 00:07:54.151 { 00:07:54.151 "name": "BaseBdev3", 00:07:54.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.151 "is_configured": false, 00:07:54.151 "data_offset": 0, 00:07:54.151 "data_size": 0 00:07:54.151 } 00:07:54.151 ] 00:07:54.151 }' 00:07:54.151 13:31:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:54.151 13:31:33 -- common/autotest_common.sh@10 -- # set +x 00:07:54.414 13:31:33 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:54.679 [2024-07-10 13:31:33.799713] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:54.679 [2024-07-10 13:31:33.799745] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c0cc500 name Existed_Raid, state configuring 00:07:54.679 13:31:33 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:07:54.679 13:31:33 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:54.679 [2024-07-10 13:31:33.987767] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.679 [2024-07-10 13:31:33.988698] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:54.679 [2024-07-10 13:31:33.988742] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:54.679 [2024-07-10 13:31:33.988746] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:54.679 [2024-07-10 13:31:33.988753] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:54.679 13:31:34 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:54.679 13:31:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:54.679 13:31:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:54.679 13:31:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:54.679 13:31:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:54.679 13:31:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:54.679 13:31:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:54.679 13:31:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:54.679 13:31:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:54.679 13:31:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:54.679 13:31:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:54.679 13:31:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:54.679 13:31:34 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:54.679 13:31:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.939 13:31:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:54.939 "name": "Existed_Raid", 00:07:54.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.939 "strip_size_kb": 0, 00:07:54.939 "state": "configuring", 00:07:54.939 "raid_level": "raid1", 00:07:54.939 "superblock": false, 00:07:54.939 
"num_base_bdevs": 3, 00:07:54.939 "num_base_bdevs_discovered": 1, 00:07:54.939 "num_base_bdevs_operational": 3, 00:07:54.939 "base_bdevs_list": [ 00:07:54.939 { 00:07:54.939 "name": "BaseBdev1", 00:07:54.939 "uuid": "b8ce7218-3ec0-11ef-b9c4-5b09e08d4792", 00:07:54.939 "is_configured": true, 00:07:54.939 "data_offset": 0, 00:07:54.939 "data_size": 65536 00:07:54.939 }, 00:07:54.939 { 00:07:54.939 "name": "BaseBdev2", 00:07:54.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.939 "is_configured": false, 00:07:54.939 "data_offset": 0, 00:07:54.939 "data_size": 0 00:07:54.939 }, 00:07:54.939 { 00:07:54.939 "name": "BaseBdev3", 00:07:54.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.939 "is_configured": false, 00:07:54.939 "data_offset": 0, 00:07:54.939 "data_size": 0 00:07:54.939 } 00:07:54.939 ] 00:07:54.939 }' 00:07:54.939 13:31:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:54.939 13:31:34 -- common/autotest_common.sh@10 -- # set +x 00:07:55.198 13:31:34 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:55.457 [2024-07-10 13:31:34.652057] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:55.457 BaseBdev2 00:07:55.457 13:31:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:55.457 13:31:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:07:55.457 13:31:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:55.457 13:31:34 -- common/autotest_common.sh@889 -- # local i 00:07:55.457 13:31:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:55.457 13:31:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:55.457 13:31:34 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:55.715 13:31:34 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:55.715 [ 00:07:55.715 { 00:07:55.715 "name": "BaseBdev2", 00:07:55.715 "aliases": [ 00:07:55.715 "b9f8f856-3ec0-11ef-b9c4-5b09e08d4792" 00:07:55.715 ], 00:07:55.715 "product_name": "Malloc disk", 00:07:55.715 "block_size": 512, 00:07:55.715 "num_blocks": 65536, 00:07:55.715 "uuid": "b9f8f856-3ec0-11ef-b9c4-5b09e08d4792", 00:07:55.715 "assigned_rate_limits": { 00:07:55.715 "rw_ios_per_sec": 0, 00:07:55.715 "rw_mbytes_per_sec": 0, 00:07:55.715 "r_mbytes_per_sec": 0, 00:07:55.715 "w_mbytes_per_sec": 0 00:07:55.715 }, 00:07:55.715 "claimed": true, 00:07:55.715 "claim_type": "exclusive_write", 00:07:55.715 "zoned": false, 00:07:55.715 "supported_io_types": { 00:07:55.715 "read": true, 00:07:55.715 "write": true, 00:07:55.715 "unmap": true, 00:07:55.715 "write_zeroes": true, 00:07:55.715 "flush": true, 00:07:55.715 "reset": true, 00:07:55.715 "compare": false, 00:07:55.715 "compare_and_write": false, 00:07:55.715 "abort": true, 00:07:55.715 "nvme_admin": false, 00:07:55.715 "nvme_io": false 00:07:55.715 }, 00:07:55.715 "memory_domains": [ 00:07:55.715 { 00:07:55.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.715 "dma_device_type": 2 00:07:55.715 } 00:07:55.715 ], 00:07:55.715 "driver_specific": {} 00:07:55.715 } 00:07:55.715 ] 00:07:55.715 13:31:35 -- common/autotest_common.sh@895 -- # return 0 00:07:55.715 13:31:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:55.715 13:31:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:55.715 
13:31:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:55.715 13:31:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:55.715 13:31:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:55.715 13:31:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:55.715 13:31:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:55.715 13:31:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:55.715 13:31:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:55.715 13:31:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:55.715 13:31:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:55.715 13:31:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:55.715 13:31:35 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:55.715 13:31:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.974 13:31:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:55.974 "name": "Existed_Raid", 00:07:55.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.974 "strip_size_kb": 0, 00:07:55.974 "state": "configuring", 00:07:55.974 "raid_level": "raid1", 00:07:55.974 "superblock": false, 00:07:55.974 "num_base_bdevs": 3, 00:07:55.974 "num_base_bdevs_discovered": 2, 00:07:55.974 "num_base_bdevs_operational": 3, 00:07:55.974 "base_bdevs_list": [ 00:07:55.974 { 00:07:55.974 "name": "BaseBdev1", 00:07:55.974 "uuid": "b8ce7218-3ec0-11ef-b9c4-5b09e08d4792", 00:07:55.974 "is_configured": true, 00:07:55.974 "data_offset": 0, 00:07:55.974 "data_size": 65536 00:07:55.974 }, 00:07:55.974 { 00:07:55.974 "name": "BaseBdev2", 00:07:55.974 "uuid": "b9f8f856-3ec0-11ef-b9c4-5b09e08d4792", 00:07:55.974 "is_configured": true, 00:07:55.974 "data_offset": 0, 00:07:55.974 "data_size": 65536 00:07:55.974 }, 00:07:55.974 { 00:07:55.974 "name": "BaseBdev3", 00:07:55.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.974 "is_configured": false, 00:07:55.974 "data_offset": 0, 00:07:55.974 "data_size": 0 00:07:55.974 } 00:07:55.974 ] 00:07:55.974 }' 00:07:55.974 13:31:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:55.974 13:31:35 -- common/autotest_common.sh@10 -- # set +x 00:07:56.232 13:31:35 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:07:56.491 [2024-07-10 13:31:35.680226] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:56.491 [2024-07-10 13:31:35.680253] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c0cca00 00:07:56.491 [2024-07-10 13:31:35.680257] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:56.491 [2024-07-10 13:31:35.680276] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c12fec0 00:07:56.491 [2024-07-10 13:31:35.680375] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c0cca00 00:07:56.491 [2024-07-10 13:31:35.680379] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c0cca00 00:07:56.491 [2024-07-10 13:31:35.680407] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.491 BaseBdev3 00:07:56.491 13:31:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:07:56.491 13:31:35 -- common/autotest_common.sh@887 
-- # local bdev_name=BaseBdev3 00:07:56.491 13:31:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:56.491 13:31:35 -- common/autotest_common.sh@889 -- # local i 00:07:56.491 13:31:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:56.491 13:31:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:56.491 13:31:35 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:56.749 13:31:35 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:56.749 [ 00:07:56.749 { 00:07:56.749 "name": "BaseBdev3", 00:07:56.749 "aliases": [ 00:07:56.749 "ba95dcae-3ec0-11ef-b9c4-5b09e08d4792" 00:07:56.749 ], 00:07:56.749 "product_name": "Malloc disk", 00:07:56.749 "block_size": 512, 00:07:56.749 "num_blocks": 65536, 00:07:56.749 "uuid": "ba95dcae-3ec0-11ef-b9c4-5b09e08d4792", 00:07:56.749 "assigned_rate_limits": { 00:07:56.749 "rw_ios_per_sec": 0, 00:07:56.749 "rw_mbytes_per_sec": 0, 00:07:56.749 "r_mbytes_per_sec": 0, 00:07:56.749 "w_mbytes_per_sec": 0 00:07:56.749 }, 00:07:56.749 "claimed": true, 00:07:56.749 "claim_type": "exclusive_write", 00:07:56.749 "zoned": false, 00:07:56.749 "supported_io_types": { 00:07:56.749 "read": true, 00:07:56.749 "write": true, 00:07:56.749 "unmap": true, 00:07:56.749 "write_zeroes": true, 00:07:56.749 "flush": true, 00:07:56.749 "reset": true, 00:07:56.749 "compare": false, 00:07:56.749 "compare_and_write": false, 00:07:56.749 "abort": true, 00:07:56.749 "nvme_admin": false, 00:07:56.749 "nvme_io": false 00:07:56.749 }, 00:07:56.749 "memory_domains": [ 00:07:56.749 { 00:07:56.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.749 "dma_device_type": 2 00:07:56.749 } 00:07:56.749 ], 00:07:56.749 "driver_specific": {} 00:07:56.749 } 00:07:56.749 ] 00:07:56.749 13:31:36 -- common/autotest_common.sh@895 -- # return 0 00:07:56.749 13:31:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:56.749 13:31:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:56.749 13:31:36 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:07:56.749 13:31:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:56.749 13:31:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:56.749 13:31:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:56.749 13:31:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:56.749 13:31:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:56.749 13:31:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:56.749 13:31:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:56.749 13:31:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:56.749 13:31:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:56.749 13:31:36 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:56.749 13:31:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.007 13:31:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:57.007 "name": "Existed_Raid", 00:07:57.007 "uuid": "ba95e240-3ec0-11ef-b9c4-5b09e08d4792", 00:07:57.007 "strip_size_kb": 0, 00:07:57.007 "state": "online", 00:07:57.007 "raid_level": "raid1", 00:07:57.007 "superblock": false, 00:07:57.007 "num_base_bdevs": 3, 00:07:57.007 "num_base_bdevs_discovered": 3, 
00:07:57.007 "num_base_bdevs_operational": 3, 00:07:57.007 "base_bdevs_list": [ 00:07:57.007 { 00:07:57.007 "name": "BaseBdev1", 00:07:57.007 "uuid": "b8ce7218-3ec0-11ef-b9c4-5b09e08d4792", 00:07:57.007 "is_configured": true, 00:07:57.007 "data_offset": 0, 00:07:57.007 "data_size": 65536 00:07:57.007 }, 00:07:57.007 { 00:07:57.007 "name": "BaseBdev2", 00:07:57.007 "uuid": "b9f8f856-3ec0-11ef-b9c4-5b09e08d4792", 00:07:57.007 "is_configured": true, 00:07:57.007 "data_offset": 0, 00:07:57.007 "data_size": 65536 00:07:57.007 }, 00:07:57.007 { 00:07:57.007 "name": "BaseBdev3", 00:07:57.007 "uuid": "ba95dcae-3ec0-11ef-b9c4-5b09e08d4792", 00:07:57.007 "is_configured": true, 00:07:57.007 "data_offset": 0, 00:07:57.007 "data_size": 65536 00:07:57.007 } 00:07:57.007 ] 00:07:57.007 }' 00:07:57.007 13:31:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:57.007 13:31:36 -- common/autotest_common.sh@10 -- # set +x 00:07:57.265 13:31:36 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:57.523 [2024-07-10 13:31:36.720411] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:57.523 13:31:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:07:57.523 13:31:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:07:57.523 13:31:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:57.523 13:31:36 -- bdev/bdev_raid.sh@196 -- # return 0 00:07:57.523 13:31:36 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:07:57.523 13:31:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:57.523 13:31:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:57.523 13:31:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:57.523 13:31:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:57.523 13:31:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:57.523 13:31:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:57.523 13:31:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:57.523 13:31:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:57.523 13:31:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:57.523 13:31:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:57.523 13:31:36 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:57.523 13:31:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.781 13:31:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:57.781 "name": "Existed_Raid", 00:07:57.781 "uuid": "ba95e240-3ec0-11ef-b9c4-5b09e08d4792", 00:07:57.781 "strip_size_kb": 0, 00:07:57.781 "state": "online", 00:07:57.781 "raid_level": "raid1", 00:07:57.781 "superblock": false, 00:07:57.781 "num_base_bdevs": 3, 00:07:57.781 "num_base_bdevs_discovered": 2, 00:07:57.781 "num_base_bdevs_operational": 2, 00:07:57.781 "base_bdevs_list": [ 00:07:57.781 { 00:07:57.781 "name": null, 00:07:57.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.781 "is_configured": false, 00:07:57.781 "data_offset": 0, 00:07:57.781 "data_size": 65536 00:07:57.781 }, 00:07:57.781 { 00:07:57.781 "name": "BaseBdev2", 00:07:57.781 "uuid": "b9f8f856-3ec0-11ef-b9c4-5b09e08d4792", 00:07:57.781 "is_configured": true, 00:07:57.781 "data_offset": 0, 00:07:57.781 "data_size": 65536 00:07:57.781 }, 00:07:57.781 { 00:07:57.781 "name": "BaseBdev3", 00:07:57.781 "uuid": 
"ba95dcae-3ec0-11ef-b9c4-5b09e08d4792", 00:07:57.781 "is_configured": true, 00:07:57.781 "data_offset": 0, 00:07:57.781 "data_size": 65536 00:07:57.781 } 00:07:57.781 ] 00:07:57.781 }' 00:07:57.781 13:31:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:57.781 13:31:36 -- common/autotest_common.sh@10 -- # set +x 00:07:58.039 13:31:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:07:58.039 13:31:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:58.039 13:31:37 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:58.039 13:31:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:58.039 13:31:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:58.039 13:31:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:58.039 13:31:37 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:58.298 [2024-07-10 13:31:37.573976] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:58.298 13:31:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:58.298 13:31:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:58.298 13:31:37 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:58.298 13:31:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:58.557 13:31:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:58.557 13:31:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:58.557 13:31:37 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:07:58.817 [2024-07-10 13:31:37.955301] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:58.817 [2024-07-10 13:31:37.955321] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:58.817 [2024-07-10 13:31:37.955333] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.817 [2024-07-10 13:31:37.964538] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.817 [2024-07-10 13:31:37.964546] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c0cca00 name Existed_Raid, state offline 00:07:58.817 13:31:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:58.817 13:31:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:58.817 13:31:37 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:07:58.817 13:31:37 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:58.817 13:31:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:07:58.817 13:31:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:07:58.817 13:31:38 -- bdev/bdev_raid.sh@287 -- # killprocess 50579 00:07:58.817 13:31:38 -- common/autotest_common.sh@926 -- # '[' -z 50579 ']' 00:07:58.817 13:31:38 -- common/autotest_common.sh@930 -- # kill -0 50579 00:07:58.817 13:31:38 -- common/autotest_common.sh@931 -- # uname 00:07:58.817 13:31:38 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:58.817 13:31:38 -- common/autotest_common.sh@934 -- # ps -c -o command 50579 00:07:58.817 13:31:38 -- common/autotest_common.sh@934 -- # tail -1 00:07:59.076 13:31:38 -- common/autotest_common.sh@934 -- # 
process_name=bdev_svc 00:07:59.076 13:31:38 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:59.076 killing process with pid 50579 00:07:59.076 13:31:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50579' 00:07:59.076 13:31:38 -- common/autotest_common.sh@945 -- # kill 50579 00:07:59.076 [2024-07-10 13:31:38.186841] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:59.076 [2024-07-10 13:31:38.186900] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:59.076 13:31:38 -- common/autotest_common.sh@950 -- # wait 50579 00:07:59.076 13:31:38 -- bdev/bdev_raid.sh@289 -- # return 0 00:07:59.076 00:07:59.076 real 0m8.025s 00:07:59.076 user 0m13.688s 00:07:59.076 sys 0m1.590s 00:07:59.076 13:31:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.076 13:31:38 -- common/autotest_common.sh@10 -- # set +x 00:07:59.076 ************************************ 00:07:59.076 END TEST raid_state_function_test 00:07:59.076 ************************************ 00:07:59.334 13:31:38 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:07:59.334 13:31:38 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:59.334 13:31:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.334 13:31:38 -- common/autotest_common.sh@10 -- # set +x 00:07:59.334 ************************************ 00:07:59.334 START TEST raid_state_function_test_sb 00:07:59.334 ************************************ 00:07:59.334 13:31:38 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:07:59.334 13:31:38 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:07:59.334 13:31:38 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:07:59.334 13:31:38 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:07:59.334 13:31:38 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:59.334 13:31:38 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@226 -- # raid_pid=50812 00:07:59.335 
13:31:38 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 50812' 00:07:59.335 Process raid pid: 50812 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:59.335 13:31:38 -- bdev/bdev_raid.sh@228 -- # waitforlisten 50812 /var/tmp/spdk-raid.sock 00:07:59.335 13:31:38 -- common/autotest_common.sh@819 -- # '[' -z 50812 ']' 00:07:59.335 13:31:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:59.335 13:31:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:59.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:59.335 13:31:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:59.335 13:31:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:59.335 13:31:38 -- common/autotest_common.sh@10 -- # set +x 00:07:59.335 [2024-07-10 13:31:38.488249] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:59.335 [2024-07-10 13:31:38.488590] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:59.593 EAL: TSC is not safe to use in SMP mode 00:07:59.593 EAL: TSC is not invariant 00:07:59.593 [2024-07-10 13:31:38.918265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.852 [2024-07-10 13:31:39.008495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.852 [2024-07-10 13:31:39.008916] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.852 [2024-07-10 13:31:39.008926] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.111 13:31:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:00.111 13:31:39 -- common/autotest_common.sh@852 -- # return 0 00:08:00.111 13:31:39 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:00.371 [2024-07-10 13:31:39.575971] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.371 [2024-07-10 13:31:39.576025] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.371 [2024-07-10 13:31:39.576030] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.371 [2024-07-10 13:31:39.576036] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.371 [2024-07-10 13:31:39.576039] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:00.371 [2024-07-10 13:31:39.576044] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:00.371 13:31:39 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:00.371 13:31:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:00.371 13:31:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:00.371 13:31:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:00.371 13:31:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:00.371 13:31:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:00.371 13:31:39 -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:08:00.371 13:31:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:00.371 13:31:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:00.371 13:31:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:00.371 13:31:39 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:00.371 13:31:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.631 13:31:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:00.631 "name": "Existed_Raid", 00:08:00.631 "uuid": "bce85221-3ec0-11ef-b9c4-5b09e08d4792", 00:08:00.631 "strip_size_kb": 0, 00:08:00.631 "state": "configuring", 00:08:00.631 "raid_level": "raid1", 00:08:00.631 "superblock": true, 00:08:00.631 "num_base_bdevs": 3, 00:08:00.631 "num_base_bdevs_discovered": 0, 00:08:00.631 "num_base_bdevs_operational": 3, 00:08:00.631 "base_bdevs_list": [ 00:08:00.631 { 00:08:00.631 "name": "BaseBdev1", 00:08:00.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.631 "is_configured": false, 00:08:00.631 "data_offset": 0, 00:08:00.631 "data_size": 0 00:08:00.631 }, 00:08:00.631 { 00:08:00.631 "name": "BaseBdev2", 00:08:00.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.631 "is_configured": false, 00:08:00.631 "data_offset": 0, 00:08:00.631 "data_size": 0 00:08:00.631 }, 00:08:00.631 { 00:08:00.631 "name": "BaseBdev3", 00:08:00.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.631 "is_configured": false, 00:08:00.631 "data_offset": 0, 00:08:00.631 "data_size": 0 00:08:00.631 } 00:08:00.631 ] 00:08:00.631 }' 00:08:00.631 13:31:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:00.632 13:31:39 -- common/autotest_common.sh@10 -- # set +x 00:08:00.891 13:31:40 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:00.891 [2024-07-10 13:31:40.244132] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:00.891 [2024-07-10 13:31:40.244156] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b467500 name Existed_Raid, state configuring 00:08:01.150 13:31:40 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:01.150 [2024-07-10 13:31:40.432207] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.150 [2024-07-10 13:31:40.432247] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.150 [2024-07-10 13:31:40.432251] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.150 [2024-07-10 13:31:40.432257] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.150 [2024-07-10 13:31:40.432260] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:01.150 [2024-07-10 13:31:40.432265] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:01.150 13:31:40 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:01.410 [2024-07-10 13:31:40.621043] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.410 BaseBdev1 00:08:01.410 13:31:40 -- 
bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:08:01.410 13:31:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:08:01.410 13:31:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:01.410 13:31:40 -- common/autotest_common.sh@889 -- # local i 00:08:01.410 13:31:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:01.410 13:31:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:01.410 13:31:40 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:01.669 13:31:40 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:01.669 [ 00:08:01.669 { 00:08:01.669 "name": "BaseBdev1", 00:08:01.669 "aliases": [ 00:08:01.669 "bd87aaa9-3ec0-11ef-b9c4-5b09e08d4792" 00:08:01.669 ], 00:08:01.669 "product_name": "Malloc disk", 00:08:01.669 "block_size": 512, 00:08:01.669 "num_blocks": 65536, 00:08:01.669 "uuid": "bd87aaa9-3ec0-11ef-b9c4-5b09e08d4792", 00:08:01.669 "assigned_rate_limits": { 00:08:01.669 "rw_ios_per_sec": 0, 00:08:01.669 "rw_mbytes_per_sec": 0, 00:08:01.669 "r_mbytes_per_sec": 0, 00:08:01.669 "w_mbytes_per_sec": 0 00:08:01.669 }, 00:08:01.669 "claimed": true, 00:08:01.669 "claim_type": "exclusive_write", 00:08:01.669 "zoned": false, 00:08:01.669 "supported_io_types": { 00:08:01.669 "read": true, 00:08:01.669 "write": true, 00:08:01.669 "unmap": true, 00:08:01.669 "write_zeroes": true, 00:08:01.669 "flush": true, 00:08:01.669 "reset": true, 00:08:01.669 "compare": false, 00:08:01.669 "compare_and_write": false, 00:08:01.669 "abort": true, 00:08:01.669 "nvme_admin": false, 00:08:01.669 "nvme_io": false 00:08:01.669 }, 00:08:01.669 "memory_domains": [ 00:08:01.669 { 00:08:01.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.669 "dma_device_type": 2 00:08:01.669 } 00:08:01.669 ], 00:08:01.669 "driver_specific": {} 00:08:01.669 } 00:08:01.669 ] 00:08:01.669 13:31:41 -- common/autotest_common.sh@895 -- # return 0 00:08:01.669 13:31:41 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:01.670 13:31:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:01.670 13:31:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:01.670 13:31:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:01.670 13:31:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:01.670 13:31:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:01.670 13:31:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:01.670 13:31:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:01.670 13:31:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:01.670 13:31:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:01.670 13:31:41 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:01.670 13:31:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.929 13:31:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:01.929 "name": "Existed_Raid", 00:08:01.929 "uuid": "bd6af8f7-3ec0-11ef-b9c4-5b09e08d4792", 00:08:01.929 "strip_size_kb": 0, 00:08:01.929 "state": "configuring", 00:08:01.929 "raid_level": "raid1", 00:08:01.929 "superblock": true, 00:08:01.929 "num_base_bdevs": 3, 00:08:01.929 "num_base_bdevs_discovered": 1, 00:08:01.929 
"num_base_bdevs_operational": 3, 00:08:01.929 "base_bdevs_list": [ 00:08:01.929 { 00:08:01.929 "name": "BaseBdev1", 00:08:01.929 "uuid": "bd87aaa9-3ec0-11ef-b9c4-5b09e08d4792", 00:08:01.929 "is_configured": true, 00:08:01.929 "data_offset": 2048, 00:08:01.929 "data_size": 63488 00:08:01.929 }, 00:08:01.929 { 00:08:01.929 "name": "BaseBdev2", 00:08:01.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.929 "is_configured": false, 00:08:01.929 "data_offset": 0, 00:08:01.929 "data_size": 0 00:08:01.929 }, 00:08:01.929 { 00:08:01.929 "name": "BaseBdev3", 00:08:01.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.929 "is_configured": false, 00:08:01.929 "data_offset": 0, 00:08:01.929 "data_size": 0 00:08:01.929 } 00:08:01.929 ] 00:08:01.929 }' 00:08:01.929 13:31:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:01.929 13:31:41 -- common/autotest_common.sh@10 -- # set +x 00:08:02.188 13:31:41 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:02.447 [2024-07-10 13:31:41.680553] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:02.447 [2024-07-10 13:31:41.680580] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b467500 name Existed_Raid, state configuring 00:08:02.447 13:31:41 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:08:02.447 13:31:41 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:02.707 13:31:41 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:02.707 BaseBdev1 00:08:02.966 13:31:42 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:08:02.966 13:31:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:08:02.966 13:31:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:02.966 13:31:42 -- common/autotest_common.sh@889 -- # local i 00:08:02.966 13:31:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:02.966 13:31:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:02.966 13:31:42 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:02.966 13:31:42 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:03.224 [ 00:08:03.224 { 00:08:03.224 "name": "BaseBdev1", 00:08:03.224 "aliases": [ 00:08:03.224 "be6429fe-3ec0-11ef-b9c4-5b09e08d4792" 00:08:03.224 ], 00:08:03.224 "product_name": "Malloc disk", 00:08:03.224 "block_size": 512, 00:08:03.224 "num_blocks": 65536, 00:08:03.224 "uuid": "be6429fe-3ec0-11ef-b9c4-5b09e08d4792", 00:08:03.224 "assigned_rate_limits": { 00:08:03.224 "rw_ios_per_sec": 0, 00:08:03.224 "rw_mbytes_per_sec": 0, 00:08:03.224 "r_mbytes_per_sec": 0, 00:08:03.224 "w_mbytes_per_sec": 0 00:08:03.224 }, 00:08:03.224 "claimed": false, 00:08:03.224 "zoned": false, 00:08:03.224 "supported_io_types": { 00:08:03.224 "read": true, 00:08:03.224 "write": true, 00:08:03.224 "unmap": true, 00:08:03.224 "write_zeroes": true, 00:08:03.224 "flush": true, 00:08:03.224 "reset": true, 00:08:03.224 "compare": false, 00:08:03.224 "compare_and_write": false, 00:08:03.224 "abort": true, 00:08:03.224 "nvme_admin": false, 00:08:03.224 "nvme_io": false 00:08:03.224 }, 00:08:03.224 "memory_domains": [ 
00:08:03.224 { 00:08:03.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.224 "dma_device_type": 2 00:08:03.224 } 00:08:03.224 ], 00:08:03.224 "driver_specific": {} 00:08:03.224 } 00:08:03.224 ] 00:08:03.224 13:31:42 -- common/autotest_common.sh@895 -- # return 0 00:08:03.224 13:31:42 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:03.483 [2024-07-10 13:31:42.641493] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.483 [2024-07-10 13:31:42.641882] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.483 [2024-07-10 13:31:42.641923] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.483 [2024-07-10 13:31:42.641927] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:03.483 [2024-07-10 13:31:42.641933] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:03.483 13:31:42 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:08:03.483 13:31:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:03.483 13:31:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:03.483 13:31:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:03.483 13:31:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:03.483 13:31:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:03.483 13:31:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:03.483 13:31:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:03.483 13:31:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:03.483 13:31:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:03.483 13:31:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:03.483 13:31:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:03.483 13:31:42 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:03.483 13:31:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.746 13:31:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:03.746 "name": "Existed_Raid", 00:08:03.746 "uuid": "bebc152e-3ec0-11ef-b9c4-5b09e08d4792", 00:08:03.746 "strip_size_kb": 0, 00:08:03.746 "state": "configuring", 00:08:03.746 "raid_level": "raid1", 00:08:03.746 "superblock": true, 00:08:03.746 "num_base_bdevs": 3, 00:08:03.746 "num_base_bdevs_discovered": 1, 00:08:03.746 "num_base_bdevs_operational": 3, 00:08:03.746 "base_bdevs_list": [ 00:08:03.746 { 00:08:03.746 "name": "BaseBdev1", 00:08:03.746 "uuid": "be6429fe-3ec0-11ef-b9c4-5b09e08d4792", 00:08:03.746 "is_configured": true, 00:08:03.746 "data_offset": 2048, 00:08:03.746 "data_size": 63488 00:08:03.746 }, 00:08:03.746 { 00:08:03.746 "name": "BaseBdev2", 00:08:03.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.746 "is_configured": false, 00:08:03.746 "data_offset": 0, 00:08:03.746 "data_size": 0 00:08:03.746 }, 00:08:03.746 { 00:08:03.746 "name": "BaseBdev3", 00:08:03.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.746 "is_configured": false, 00:08:03.746 "data_offset": 0, 00:08:03.746 "data_size": 0 00:08:03.746 } 00:08:03.746 ] 00:08:03.746 }' 00:08:03.746 13:31:42 -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:08:03.746 13:31:42 -- common/autotest_common.sh@10 -- # set +x 00:08:04.007 13:31:43 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:04.007 [2024-07-10 13:31:43.317771] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.007 BaseBdev2 00:08:04.007 13:31:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:08:04.007 13:31:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:08:04.007 13:31:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:04.007 13:31:43 -- common/autotest_common.sh@889 -- # local i 00:08:04.007 13:31:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:04.007 13:31:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:04.007 13:31:43 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:04.265 13:31:43 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:04.524 [ 00:08:04.524 { 00:08:04.524 "name": "BaseBdev2", 00:08:04.524 "aliases": [ 00:08:04.524 "bf2342cd-3ec0-11ef-b9c4-5b09e08d4792" 00:08:04.524 ], 00:08:04.524 "product_name": "Malloc disk", 00:08:04.524 "block_size": 512, 00:08:04.524 "num_blocks": 65536, 00:08:04.524 "uuid": "bf2342cd-3ec0-11ef-b9c4-5b09e08d4792", 00:08:04.524 "assigned_rate_limits": { 00:08:04.524 "rw_ios_per_sec": 0, 00:08:04.524 "rw_mbytes_per_sec": 0, 00:08:04.524 "r_mbytes_per_sec": 0, 00:08:04.524 "w_mbytes_per_sec": 0 00:08:04.524 }, 00:08:04.524 "claimed": true, 00:08:04.524 "claim_type": "exclusive_write", 00:08:04.524 "zoned": false, 00:08:04.524 "supported_io_types": { 00:08:04.524 "read": true, 00:08:04.524 "write": true, 00:08:04.524 "unmap": true, 00:08:04.524 "write_zeroes": true, 00:08:04.524 "flush": true, 00:08:04.524 "reset": true, 00:08:04.524 "compare": false, 00:08:04.524 "compare_and_write": false, 00:08:04.524 "abort": true, 00:08:04.524 "nvme_admin": false, 00:08:04.524 "nvme_io": false 00:08:04.524 }, 00:08:04.524 "memory_domains": [ 00:08:04.524 { 00:08:04.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.524 "dma_device_type": 2 00:08:04.524 } 00:08:04.524 ], 00:08:04.524 "driver_specific": {} 00:08:04.524 } 00:08:04.524 ] 00:08:04.524 13:31:43 -- common/autotest_common.sh@895 -- # return 0 00:08:04.524 13:31:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:04.524 13:31:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:04.524 13:31:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:04.524 13:31:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:04.524 13:31:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:04.524 13:31:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:04.524 13:31:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:04.524 13:31:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:04.524 13:31:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:04.524 13:31:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:04.524 13:31:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:04.524 13:31:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:04.524 13:31:43 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:04.524 13:31:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.524 13:31:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:04.524 "name": "Existed_Raid", 00:08:04.524 "uuid": "bebc152e-3ec0-11ef-b9c4-5b09e08d4792", 00:08:04.524 "strip_size_kb": 0, 00:08:04.524 "state": "configuring", 00:08:04.524 "raid_level": "raid1", 00:08:04.524 "superblock": true, 00:08:04.524 "num_base_bdevs": 3, 00:08:04.524 "num_base_bdevs_discovered": 2, 00:08:04.524 "num_base_bdevs_operational": 3, 00:08:04.524 "base_bdevs_list": [ 00:08:04.524 { 00:08:04.524 "name": "BaseBdev1", 00:08:04.524 "uuid": "be6429fe-3ec0-11ef-b9c4-5b09e08d4792", 00:08:04.524 "is_configured": true, 00:08:04.524 "data_offset": 2048, 00:08:04.524 "data_size": 63488 00:08:04.524 }, 00:08:04.524 { 00:08:04.524 "name": "BaseBdev2", 00:08:04.524 "uuid": "bf2342cd-3ec0-11ef-b9c4-5b09e08d4792", 00:08:04.524 "is_configured": true, 00:08:04.524 "data_offset": 2048, 00:08:04.524 "data_size": 63488 00:08:04.524 }, 00:08:04.524 { 00:08:04.524 "name": "BaseBdev3", 00:08:04.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.524 "is_configured": false, 00:08:04.524 "data_offset": 0, 00:08:04.524 "data_size": 0 00:08:04.524 } 00:08:04.524 ] 00:08:04.524 }' 00:08:04.524 13:31:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:04.524 13:31:43 -- common/autotest_common.sh@10 -- # set +x 00:08:05.093 13:31:44 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:05.093 [2024-07-10 13:31:44.346014] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:05.093 [2024-07-10 13:31:44.346067] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b467a00 00:08:05.093 [2024-07-10 13:31:44.346072] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:05.093 [2024-07-10 13:31:44.346088] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b4caec0 00:08:05.093 [2024-07-10 13:31:44.346122] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b467a00 00:08:05.093 [2024-07-10 13:31:44.346125] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b467a00 00:08:05.093 [2024-07-10 13:31:44.346140] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.093 BaseBdev3 00:08:05.093 13:31:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:08:05.093 13:31:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:08:05.093 13:31:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:05.093 13:31:44 -- common/autotest_common.sh@889 -- # local i 00:08:05.093 13:31:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:05.093 13:31:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:05.093 13:31:44 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:05.361 13:31:44 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:05.633 [ 00:08:05.633 { 00:08:05.633 "name": "BaseBdev3", 00:08:05.633 "aliases": [ 00:08:05.633 "bfc029d6-3ec0-11ef-b9c4-5b09e08d4792" 00:08:05.633 ], 00:08:05.633 "product_name": "Malloc disk", 00:08:05.633 "block_size": 512, 
00:08:05.633 "num_blocks": 65536, 00:08:05.633 "uuid": "bfc029d6-3ec0-11ef-b9c4-5b09e08d4792", 00:08:05.633 "assigned_rate_limits": { 00:08:05.633 "rw_ios_per_sec": 0, 00:08:05.633 "rw_mbytes_per_sec": 0, 00:08:05.633 "r_mbytes_per_sec": 0, 00:08:05.633 "w_mbytes_per_sec": 0 00:08:05.633 }, 00:08:05.633 "claimed": true, 00:08:05.633 "claim_type": "exclusive_write", 00:08:05.633 "zoned": false, 00:08:05.633 "supported_io_types": { 00:08:05.633 "read": true, 00:08:05.633 "write": true, 00:08:05.633 "unmap": true, 00:08:05.633 "write_zeroes": true, 00:08:05.633 "flush": true, 00:08:05.633 "reset": true, 00:08:05.633 "compare": false, 00:08:05.633 "compare_and_write": false, 00:08:05.633 "abort": true, 00:08:05.633 "nvme_admin": false, 00:08:05.633 "nvme_io": false 00:08:05.633 }, 00:08:05.633 "memory_domains": [ 00:08:05.633 { 00:08:05.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.633 "dma_device_type": 2 00:08:05.633 } 00:08:05.633 ], 00:08:05.633 "driver_specific": {} 00:08:05.633 } 00:08:05.633 ] 00:08:05.633 13:31:44 -- common/autotest_common.sh@895 -- # return 0 00:08:05.633 13:31:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:05.633 13:31:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:05.634 13:31:44 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:05.634 13:31:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:05.634 13:31:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:05.634 13:31:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:05.634 13:31:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:05.634 13:31:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:05.634 13:31:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:05.634 13:31:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:05.634 13:31:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:05.634 13:31:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:05.634 13:31:44 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:05.634 13:31:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.634 13:31:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:05.634 "name": "Existed_Raid", 00:08:05.634 "uuid": "bebc152e-3ec0-11ef-b9c4-5b09e08d4792", 00:08:05.634 "strip_size_kb": 0, 00:08:05.634 "state": "online", 00:08:05.634 "raid_level": "raid1", 00:08:05.634 "superblock": true, 00:08:05.634 "num_base_bdevs": 3, 00:08:05.634 "num_base_bdevs_discovered": 3, 00:08:05.634 "num_base_bdevs_operational": 3, 00:08:05.634 "base_bdevs_list": [ 00:08:05.634 { 00:08:05.634 "name": "BaseBdev1", 00:08:05.634 "uuid": "be6429fe-3ec0-11ef-b9c4-5b09e08d4792", 00:08:05.634 "is_configured": true, 00:08:05.634 "data_offset": 2048, 00:08:05.634 "data_size": 63488 00:08:05.634 }, 00:08:05.634 { 00:08:05.634 "name": "BaseBdev2", 00:08:05.634 "uuid": "bf2342cd-3ec0-11ef-b9c4-5b09e08d4792", 00:08:05.634 "is_configured": true, 00:08:05.634 "data_offset": 2048, 00:08:05.634 "data_size": 63488 00:08:05.634 }, 00:08:05.634 { 00:08:05.634 "name": "BaseBdev3", 00:08:05.634 "uuid": "bfc029d6-3ec0-11ef-b9c4-5b09e08d4792", 00:08:05.634 "is_configured": true, 00:08:05.634 "data_offset": 2048, 00:08:05.634 "data_size": 63488 00:08:05.634 } 00:08:05.634 ] 00:08:05.634 }' 00:08:05.634 13:31:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:05.634 13:31:44 -- 
common/autotest_common.sh@10 -- # set +x 00:08:05.893 13:31:45 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:06.152 [2024-07-10 13:31:45.394226] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:06.152 13:31:45 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:08:06.152 13:31:45 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:08:06.152 13:31:45 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:06.152 13:31:45 -- bdev/bdev_raid.sh@196 -- # return 0 00:08:06.152 13:31:45 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:08:06.152 13:31:45 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:06.152 13:31:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:06.152 13:31:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:06.152 13:31:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:06.152 13:31:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:06.152 13:31:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:06.152 13:31:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:06.152 13:31:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:06.152 13:31:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:06.152 13:31:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:06.152 13:31:45 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:06.152 13:31:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.411 13:31:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:06.411 "name": "Existed_Raid", 00:08:06.411 "uuid": "bebc152e-3ec0-11ef-b9c4-5b09e08d4792", 00:08:06.411 "strip_size_kb": 0, 00:08:06.411 "state": "online", 00:08:06.411 "raid_level": "raid1", 00:08:06.411 "superblock": true, 00:08:06.411 "num_base_bdevs": 3, 00:08:06.411 "num_base_bdevs_discovered": 2, 00:08:06.411 "num_base_bdevs_operational": 2, 00:08:06.411 "base_bdevs_list": [ 00:08:06.411 { 00:08:06.411 "name": null, 00:08:06.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.411 "is_configured": false, 00:08:06.411 "data_offset": 2048, 00:08:06.411 "data_size": 63488 00:08:06.411 }, 00:08:06.411 { 00:08:06.411 "name": "BaseBdev2", 00:08:06.411 "uuid": "bf2342cd-3ec0-11ef-b9c4-5b09e08d4792", 00:08:06.411 "is_configured": true, 00:08:06.411 "data_offset": 2048, 00:08:06.411 "data_size": 63488 00:08:06.411 }, 00:08:06.411 { 00:08:06.411 "name": "BaseBdev3", 00:08:06.411 "uuid": "bfc029d6-3ec0-11ef-b9c4-5b09e08d4792", 00:08:06.411 "is_configured": true, 00:08:06.411 "data_offset": 2048, 00:08:06.411 "data_size": 63488 00:08:06.411 } 00:08:06.411 ] 00:08:06.411 }' 00:08:06.411 13:31:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:06.411 13:31:45 -- common/autotest_common.sh@10 -- # set +x 00:08:06.671 13:31:45 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:08:06.671 13:31:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:06.671 13:31:45 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:06.671 13:31:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:06.929 13:31:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:06.929 13:31:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:06.929 13:31:46 -- 
bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:06.929 [2024-07-10 13:31:46.231093] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:06.929 13:31:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:06.929 13:31:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:06.929 13:31:46 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:06.929 13:31:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:07.187 13:31:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:07.187 13:31:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:07.187 13:31:46 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:08:07.446 [2024-07-10 13:31:46.587834] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:07.446 [2024-07-10 13:31:46.587852] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:07.446 [2024-07-10 13:31:46.587860] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.446 [2024-07-10 13:31:46.592486] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:07.446 [2024-07-10 13:31:46.592500] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b467a00 name Existed_Raid, state offline 00:08:07.446 13:31:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:07.446 13:31:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:07.446 13:31:46 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:07.446 13:31:46 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:08:07.446 13:31:46 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:08:07.446 13:31:46 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:08:07.446 13:31:46 -- bdev/bdev_raid.sh@287 -- # killprocess 50812 00:08:07.446 13:31:46 -- common/autotest_common.sh@926 -- # '[' -z 50812 ']' 00:08:07.446 13:31:46 -- common/autotest_common.sh@930 -- # kill -0 50812 00:08:07.446 13:31:46 -- common/autotest_common.sh@931 -- # uname 00:08:07.446 13:31:46 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:07.446 13:31:46 -- common/autotest_common.sh@934 -- # ps -c -o command 50812 00:08:07.446 13:31:46 -- common/autotest_common.sh@934 -- # tail -1 00:08:07.446 13:31:46 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:08:07.446 13:31:46 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:07.446 killing process with pid 50812 00:08:07.446 13:31:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50812' 00:08:07.446 13:31:46 -- common/autotest_common.sh@945 -- # kill 50812 00:08:07.446 [2024-07-10 13:31:46.809033] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:07.446 [2024-07-10 13:31:46.809066] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:07.446 13:31:46 -- common/autotest_common.sh@950 -- # wait 50812 00:08:07.706 13:31:46 -- bdev/bdev_raid.sh@289 -- # return 0 00:08:07.706 00:08:07.706 real 0m8.486s 00:08:07.706 user 0m14.678s 00:08:07.706 sys 0m1.538s 00:08:07.706 13:31:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.706 13:31:46 -- 
common/autotest_common.sh@10 -- # set +x 00:08:07.706 ************************************ 00:08:07.706 END TEST raid_state_function_test_sb 00:08:07.706 ************************************ 00:08:07.706 13:31:46 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:08:07.706 13:31:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:07.706 13:31:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.706 13:31:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.706 ************************************ 00:08:07.706 START TEST raid_superblock_test 00:08:07.706 ************************************ 00:08:07.706 13:31:47 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:08:07.706 13:31:47 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:08:07.706 13:31:47 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:08:07.706 13:31:47 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:08:07.706 13:31:47 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:08:07.706 13:31:47 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:08:07.706 13:31:47 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:08:07.706 13:31:47 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:08:07.706 13:31:47 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:08:07.706 13:31:47 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:08:07.706 13:31:47 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:08:07.706 13:31:47 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:08:07.706 13:31:47 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:08:07.706 13:31:47 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:08:07.706 13:31:47 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:08:07.706 13:31:47 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:08:07.706 13:31:47 -- bdev/bdev_raid.sh@357 -- # raid_pid=51048 00:08:07.706 13:31:47 -- bdev/bdev_raid.sh@358 -- # waitforlisten 51048 /var/tmp/spdk-raid.sock 00:08:07.706 13:31:47 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:07.706 13:31:47 -- common/autotest_common.sh@819 -- # '[' -z 51048 ']' 00:08:07.706 13:31:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:07.706 13:31:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:07.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:07.706 13:31:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:07.706 13:31:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:07.706 13:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:07.706 [2024-07-10 13:31:47.020286] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
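The raid_superblock_test run above begins by starting its own bdev_svc instance on a private RPC socket and waiting for that socket to answer before issuing any rpc.py calls. A minimal bash sketch of that bring-up, assuming the SPDK checkout location seen in the trace; the polling loop is only an approximation of what the waitforlisten helper in autotest_common.sh does (it additionally tracks the pid and retry budget):

SPDK_DIR=/usr/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock

# Launch the minimal bdev application with bdev_raid debug logging enabled.
"$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$SOCK" -L bdev_raid &
svc_pid=$!

# Block until the UNIX-domain RPC socket accepts requests, then continue with rpc.py.
until "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$svc_pid" 2>/dev/null || { echo "bdev_svc exited early" >&2; exit 1; }
    sleep 0.5
done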
00:08:07.706 [2024-07-10 13:31:47.020660] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:08.276 EAL: TSC is not safe to use in SMP mode 00:08:08.276 EAL: TSC is not invariant 00:08:08.276 [2024-07-10 13:31:47.456019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.276 [2024-07-10 13:31:47.547900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.276 [2024-07-10 13:31:47.548386] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.276 [2024-07-10 13:31:47.548395] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.846 13:31:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:08.846 13:31:47 -- common/autotest_common.sh@852 -- # return 0 00:08:08.846 13:31:47 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:08:08.846 13:31:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:08.846 13:31:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:08:08.846 13:31:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:08:08.846 13:31:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:08.846 13:31:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:08.846 13:31:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:08.846 13:31:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:08.846 13:31:47 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:08.846 malloc1 00:08:08.846 13:31:48 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:09.106 [2024-07-10 13:31:48.279589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:09.106 [2024-07-10 13:31:48.279632] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.106 [2024-07-10 13:31:48.280124] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e34780 00:08:09.107 [2024-07-10 13:31:48.280145] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.107 [2024-07-10 13:31:48.280764] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.107 [2024-07-10 13:31:48.280797] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:09.107 pt1 00:08:09.107 13:31:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:09.107 13:31:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:09.107 13:31:48 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:08:09.107 13:31:48 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:08:09.107 13:31:48 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:09.107 13:31:48 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:09.107 13:31:48 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:09.107 13:31:48 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:09.107 13:31:48 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:09.366 malloc2 00:08:09.366 13:31:48 -- bdev/bdev_raid.sh@371 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:09.366 [2024-07-10 13:31:48.715699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:09.366 [2024-07-10 13:31:48.715755] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.366 [2024-07-10 13:31:48.715777] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e34c80 00:08:09.366 [2024-07-10 13:31:48.715783] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.366 [2024-07-10 13:31:48.716254] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.366 [2024-07-10 13:31:48.716283] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:09.366 pt2 00:08:09.366 13:31:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:09.366 13:31:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:09.366 13:31:48 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:08:09.366 13:31:48 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:08:09.366 13:31:48 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:09.626 13:31:48 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:09.626 13:31:48 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:09.626 13:31:48 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:09.626 13:31:48 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:08:09.626 malloc3 00:08:09.626 13:31:48 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:09.885 [2024-07-10 13:31:49.051777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:09.885 [2024-07-10 13:31:49.051823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.885 [2024-07-10 13:31:49.051844] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e35180 00:08:09.885 [2024-07-10 13:31:49.051850] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.885 [2024-07-10 13:31:49.052280] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.885 [2024-07-10 13:31:49.052315] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:09.885 pt3 00:08:09.885 13:31:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:09.885 13:31:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:09.885 13:31:49 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:08:09.885 [2024-07-10 13:31:49.211814] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:09.885 [2024-07-10 13:31:49.212202] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:09.885 [2024-07-10 13:31:49.212220] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:09.885 [2024-07-10 13:31:49.212266] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x829e35400 00:08:09.885 [2024-07-10 13:31:49.212271] 
bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:09.886 [2024-07-10 13:31:49.212297] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829e97e20 00:08:09.886 [2024-07-10 13:31:49.212347] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829e35400 00:08:09.886 [2024-07-10 13:31:49.212349] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x829e35400 00:08:09.886 [2024-07-10 13:31:49.212366] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.886 13:31:49 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:09.886 13:31:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:09.886 13:31:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:09.886 13:31:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:09.886 13:31:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:09.886 13:31:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:09.886 13:31:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:09.886 13:31:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:09.886 13:31:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:09.886 13:31:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:09.886 13:31:49 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:09.886 13:31:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.145 13:31:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:10.145 "name": "raid_bdev1", 00:08:10.145 "uuid": "c2a6a262-3ec0-11ef-b9c4-5b09e08d4792", 00:08:10.145 "strip_size_kb": 0, 00:08:10.145 "state": "online", 00:08:10.145 "raid_level": "raid1", 00:08:10.145 "superblock": true, 00:08:10.145 "num_base_bdevs": 3, 00:08:10.145 "num_base_bdevs_discovered": 3, 00:08:10.145 "num_base_bdevs_operational": 3, 00:08:10.145 "base_bdevs_list": [ 00:08:10.145 { 00:08:10.145 "name": "pt1", 00:08:10.145 "uuid": "adcc11fb-6564-fd52-91ee-a850120e4878", 00:08:10.145 "is_configured": true, 00:08:10.145 "data_offset": 2048, 00:08:10.145 "data_size": 63488 00:08:10.145 }, 00:08:10.145 { 00:08:10.145 "name": "pt2", 00:08:10.145 "uuid": "1927751a-920e-cc5b-bf28-ab1ac7cb5289", 00:08:10.145 "is_configured": true, 00:08:10.145 "data_offset": 2048, 00:08:10.145 "data_size": 63488 00:08:10.145 }, 00:08:10.145 { 00:08:10.145 "name": "pt3", 00:08:10.145 "uuid": "c451275d-4051-7e50-a0bc-342b5962d431", 00:08:10.145 "is_configured": true, 00:08:10.145 "data_offset": 2048, 00:08:10.145 "data_size": 63488 00:08:10.145 } 00:08:10.145 ] 00:08:10.145 }' 00:08:10.145 13:31:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:10.145 13:31:49 -- common/autotest_common.sh@10 -- # set +x 00:08:10.405 13:31:49 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:10.405 13:31:49 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:08:10.665 [2024-07-10 13:31:49.859996] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:10.665 13:31:49 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=c2a6a262-3ec0-11ef-b9c4-5b09e08d4792 00:08:10.665 13:31:49 -- bdev/bdev_raid.sh@380 -- # '[' -z c2a6a262-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:08:10.665 13:31:49 -- bdev/bdev_raid.sh@385 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:10.929 [2024-07-10 13:31:50.052011] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:10.929 [2024-07-10 13:31:50.052031] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:10.929 [2024-07-10 13:31:50.052042] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.929 [2024-07-10 13:31:50.052053] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:10.929 [2024-07-10 13:31:50.052056] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829e35400 name raid_bdev1, state offline 00:08:10.929 13:31:50 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:10.929 13:31:50 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:08:10.929 13:31:50 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:08:10.929 13:31:50 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:08:10.929 13:31:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:10.929 13:31:50 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:11.192 13:31:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:11.192 13:31:50 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:11.451 13:31:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:11.451 13:31:50 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:08:11.451 13:31:50 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:11.451 13:31:50 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:11.712 13:31:50 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:08:11.712 13:31:50 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:08:11.712 13:31:50 -- common/autotest_common.sh@640 -- # local es=0 00:08:11.712 13:31:50 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:08:11.712 13:31:50 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.712 13:31:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:11.712 13:31:50 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.712 13:31:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:11.712 13:31:50 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.712 13:31:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:11.712 13:31:50 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.712 13:31:50 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:11.712 13:31:50 -- common/autotest_common.sh@643 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:08:11.971 [2024-07-10 13:31:51.148306] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:11.971 [2024-07-10 13:31:51.149044] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:11.971 [2024-07-10 13:31:51.149063] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:11.971 [2024-07-10 13:31:51.149077] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:08:11.971 [2024-07-10 13:31:51.149115] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:08:11.971 [2024-07-10 13:31:51.149123] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:08:11.971 [2024-07-10 13:31:51.149130] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:11.971 [2024-07-10 13:31:51.149134] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829e35180 name raid_bdev1, state configuring 00:08:11.971 request: 00:08:11.971 { 00:08:11.971 "name": "raid_bdev1", 00:08:11.971 "raid_level": "raid1", 00:08:11.971 "base_bdevs": [ 00:08:11.971 "malloc1", 00:08:11.971 "malloc2", 00:08:11.971 "malloc3" 00:08:11.971 ], 00:08:11.971 "superblock": false, 00:08:11.971 "method": "bdev_raid_create", 00:08:11.971 "req_id": 1 00:08:11.971 } 00:08:11.971 Got JSON-RPC error response 00:08:11.971 response: 00:08:11.971 { 00:08:11.971 "code": -17, 00:08:11.971 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:11.971 } 00:08:11.971 13:31:51 -- common/autotest_common.sh@643 -- # es=1 00:08:11.971 13:31:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:11.971 13:31:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:11.971 13:31:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:11.971 13:31:51 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:11.971 13:31:51 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:08:12.231 13:31:51 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:08:12.231 13:31:51 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:08:12.231 13:31:51 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:12.231 [2024-07-10 13:31:51.520402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:12.231 [2024-07-10 13:31:51.520468] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.231 [2024-07-10 13:31:51.520501] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e34c80 00:08:12.231 [2024-07-10 13:31:51.520508] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.231 [2024-07-10 13:31:51.521323] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.231 [2024-07-10 13:31:51.521348] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:12.231 [2024-07-10 13:31:51.521372] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:08:12.231 [2024-07-10 13:31:51.521383] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:12.231 pt1 00:08:12.231 13:31:51 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:12.231 13:31:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:12.231 13:31:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:12.231 13:31:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:12.231 13:31:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:12.231 13:31:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:12.231 13:31:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:12.231 13:31:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:12.231 13:31:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:12.231 13:31:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:12.231 13:31:51 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:12.231 13:31:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.490 13:31:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:12.490 "name": "raid_bdev1", 00:08:12.490 "uuid": "c2a6a262-3ec0-11ef-b9c4-5b09e08d4792", 00:08:12.490 "strip_size_kb": 0, 00:08:12.490 "state": "configuring", 00:08:12.490 "raid_level": "raid1", 00:08:12.490 "superblock": true, 00:08:12.490 "num_base_bdevs": 3, 00:08:12.490 "num_base_bdevs_discovered": 1, 00:08:12.491 "num_base_bdevs_operational": 3, 00:08:12.491 "base_bdevs_list": [ 00:08:12.491 { 00:08:12.491 "name": "pt1", 00:08:12.491 "uuid": "adcc11fb-6564-fd52-91ee-a850120e4878", 00:08:12.491 "is_configured": true, 00:08:12.491 "data_offset": 2048, 00:08:12.491 "data_size": 63488 00:08:12.491 }, 00:08:12.491 { 00:08:12.491 "name": null, 00:08:12.491 "uuid": "1927751a-920e-cc5b-bf28-ab1ac7cb5289", 00:08:12.491 "is_configured": false, 00:08:12.491 "data_offset": 2048, 00:08:12.491 "data_size": 63488 00:08:12.491 }, 00:08:12.491 { 00:08:12.491 "name": null, 00:08:12.491 "uuid": "c451275d-4051-7e50-a0bc-342b5962d431", 00:08:12.491 "is_configured": false, 00:08:12.491 "data_offset": 2048, 00:08:12.491 "data_size": 63488 00:08:12.491 } 00:08:12.491 ] 00:08:12.491 }' 00:08:12.491 13:31:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:12.491 13:31:51 -- common/autotest_common.sh@10 -- # set +x 00:08:12.750 13:31:51 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:08:12.750 13:31:51 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:13.010 [2024-07-10 13:31:52.168557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:13.010 [2024-07-10 13:31:52.168638] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.010 [2024-07-10 13:31:52.168671] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e35680 00:08:13.010 [2024-07-10 13:31:52.168678] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.010 [2024-07-10 13:31:52.168781] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.010 [2024-07-10 13:31:52.168791] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:13.010 [2024-07-10 13:31:52.168812] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt2 00:08:13.010 [2024-07-10 13:31:52.168819] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:13.010 pt2 00:08:13.010 13:31:52 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:13.010 [2024-07-10 13:31:52.376587] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:13.271 13:31:52 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:13.271 13:31:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:13.271 13:31:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:13.271 13:31:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:13.271 13:31:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:13.271 13:31:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:13.271 13:31:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:13.271 13:31:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:13.271 13:31:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:13.271 13:31:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:13.271 13:31:52 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:13.271 13:31:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.271 13:31:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:13.271 "name": "raid_bdev1", 00:08:13.271 "uuid": "c2a6a262-3ec0-11ef-b9c4-5b09e08d4792", 00:08:13.271 "strip_size_kb": 0, 00:08:13.271 "state": "configuring", 00:08:13.271 "raid_level": "raid1", 00:08:13.271 "superblock": true, 00:08:13.271 "num_base_bdevs": 3, 00:08:13.271 "num_base_bdevs_discovered": 1, 00:08:13.271 "num_base_bdevs_operational": 3, 00:08:13.271 "base_bdevs_list": [ 00:08:13.271 { 00:08:13.271 "name": "pt1", 00:08:13.271 "uuid": "adcc11fb-6564-fd52-91ee-a850120e4878", 00:08:13.271 "is_configured": true, 00:08:13.271 "data_offset": 2048, 00:08:13.271 "data_size": 63488 00:08:13.271 }, 00:08:13.271 { 00:08:13.271 "name": null, 00:08:13.271 "uuid": "1927751a-920e-cc5b-bf28-ab1ac7cb5289", 00:08:13.271 "is_configured": false, 00:08:13.271 "data_offset": 2048, 00:08:13.271 "data_size": 63488 00:08:13.271 }, 00:08:13.271 { 00:08:13.271 "name": null, 00:08:13.271 "uuid": "c451275d-4051-7e50-a0bc-342b5962d431", 00:08:13.271 "is_configured": false, 00:08:13.271 "data_offset": 2048, 00:08:13.271 "data_size": 63488 00:08:13.271 } 00:08:13.271 ] 00:08:13.271 }' 00:08:13.271 13:31:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:13.271 13:31:52 -- common/autotest_common.sh@10 -- # set +x 00:08:13.532 13:31:52 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:08:13.532 13:31:52 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:13.532 13:31:52 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:13.794 [2024-07-10 13:31:53.004720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:13.794 [2024-07-10 13:31:53.004759] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.794 [2024-07-10 13:31:53.004781] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e35680 00:08:13.794 [2024-07-10 13:31:53.004803] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:08:13.794 [2024-07-10 13:31:53.004876] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.794 [2024-07-10 13:31:53.004883] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:13.794 [2024-07-10 13:31:53.004898] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:08:13.794 [2024-07-10 13:31:53.004904] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:13.794 pt2 00:08:13.794 13:31:53 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:08:13.794 13:31:53 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:13.794 13:31:53 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:14.055 [2024-07-10 13:31:53.196759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:14.055 [2024-07-10 13:31:53.196798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.055 [2024-07-10 13:31:53.196827] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e35400 00:08:14.055 [2024-07-10 13:31:53.196832] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.055 [2024-07-10 13:31:53.196886] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.055 [2024-07-10 13:31:53.196892] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:14.055 [2024-07-10 13:31:53.196904] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:08:14.055 [2024-07-10 13:31:53.196908] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:14.055 [2024-07-10 13:31:53.196926] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x829e34780 00:08:14.055 [2024-07-10 13:31:53.196929] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:14.055 [2024-07-10 13:31:53.196944] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829e97e20 00:08:14.055 [2024-07-10 13:31:53.196981] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829e34780 00:08:14.055 [2024-07-10 13:31:53.196984] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x829e34780 00:08:14.055 [2024-07-10 13:31:53.196999] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.055 pt3 00:08:14.055 13:31:53 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:08:14.055 13:31:53 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:14.055 13:31:53 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:14.055 13:31:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:14.055 13:31:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:14.055 13:31:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:14.055 13:31:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:14.055 13:31:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:14.055 13:31:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:14.055 13:31:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:14.055 13:31:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:14.055 13:31:53 -- bdev/bdev_raid.sh@125 -- # local tmp 
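Throughout the trace, verify_raid_bdev_state reduces to one RPC plus a jq filter: fetch all raid bdevs, select the one under test, and compare its state and base-bdev counts against what the scenario expects. A rough standalone sketch of that check, with field names taken from the JSON shown above and the socket path matching the trace:

SPDK_DIR=/usr/home/vagrant/spdk_repo/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Pull the descriptor of raid_bdev1 and compare the fields the tests assert on.
info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(jq -r '.state' <<<"$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$info")
operational=$(jq -r '.num_base_bdevs_operational' <<<"$info")

if [ "$state" != online ] || [ "$discovered" -ne "$operational" ]; then
    echo "raid_bdev1 not fully assembled: state=$state, bdevs=$discovered/$operational" >&2
fi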
00:08:14.055 13:31:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.055 13:31:53 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:14.055 13:31:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:14.055 "name": "raid_bdev1", 00:08:14.055 "uuid": "c2a6a262-3ec0-11ef-b9c4-5b09e08d4792", 00:08:14.055 "strip_size_kb": 0, 00:08:14.055 "state": "online", 00:08:14.055 "raid_level": "raid1", 00:08:14.055 "superblock": true, 00:08:14.055 "num_base_bdevs": 3, 00:08:14.055 "num_base_bdevs_discovered": 3, 00:08:14.055 "num_base_bdevs_operational": 3, 00:08:14.055 "base_bdevs_list": [ 00:08:14.055 { 00:08:14.055 "name": "pt1", 00:08:14.055 "uuid": "adcc11fb-6564-fd52-91ee-a850120e4878", 00:08:14.055 "is_configured": true, 00:08:14.055 "data_offset": 2048, 00:08:14.055 "data_size": 63488 00:08:14.055 }, 00:08:14.055 { 00:08:14.055 "name": "pt2", 00:08:14.055 "uuid": "1927751a-920e-cc5b-bf28-ab1ac7cb5289", 00:08:14.055 "is_configured": true, 00:08:14.055 "data_offset": 2048, 00:08:14.055 "data_size": 63488 00:08:14.055 }, 00:08:14.055 { 00:08:14.055 "name": "pt3", 00:08:14.055 "uuid": "c451275d-4051-7e50-a0bc-342b5962d431", 00:08:14.055 "is_configured": true, 00:08:14.055 "data_offset": 2048, 00:08:14.055 "data_size": 63488 00:08:14.055 } 00:08:14.055 ] 00:08:14.055 }' 00:08:14.055 13:31:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:14.055 13:31:53 -- common/autotest_common.sh@10 -- # set +x 00:08:14.314 13:31:53 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:14.314 13:31:53 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:08:14.574 [2024-07-10 13:31:53.828926] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.574 13:31:53 -- bdev/bdev_raid.sh@430 -- # '[' c2a6a262-3ec0-11ef-b9c4-5b09e08d4792 '!=' c2a6a262-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:08:14.574 13:31:53 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:08:14.574 13:31:53 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:14.574 13:31:53 -- bdev/bdev_raid.sh@196 -- # return 0 00:08:14.574 13:31:53 -- bdev/bdev_raid.sh@436 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:14.832 [2024-07-10 13:31:54.016943] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:14.832 13:31:54 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:14.832 13:31:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:14.832 13:31:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:14.832 13:31:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:14.832 13:31:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:14.832 13:31:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:14.832 13:31:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:14.832 13:31:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:14.832 13:31:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:14.832 13:31:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:14.832 13:31:54 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:14.832 13:31:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.091 13:31:54 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:15.091 "name": "raid_bdev1", 00:08:15.091 "uuid": "c2a6a262-3ec0-11ef-b9c4-5b09e08d4792", 00:08:15.091 "strip_size_kb": 0, 00:08:15.091 "state": "online", 00:08:15.091 "raid_level": "raid1", 00:08:15.091 "superblock": true, 00:08:15.091 "num_base_bdevs": 3, 00:08:15.091 "num_base_bdevs_discovered": 2, 00:08:15.091 "num_base_bdevs_operational": 2, 00:08:15.091 "base_bdevs_list": [ 00:08:15.091 { 00:08:15.091 "name": null, 00:08:15.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.091 "is_configured": false, 00:08:15.091 "data_offset": 2048, 00:08:15.091 "data_size": 63488 00:08:15.091 }, 00:08:15.091 { 00:08:15.091 "name": "pt2", 00:08:15.091 "uuid": "1927751a-920e-cc5b-bf28-ab1ac7cb5289", 00:08:15.091 "is_configured": true, 00:08:15.091 "data_offset": 2048, 00:08:15.091 "data_size": 63488 00:08:15.091 }, 00:08:15.091 { 00:08:15.091 "name": "pt3", 00:08:15.091 "uuid": "c451275d-4051-7e50-a0bc-342b5962d431", 00:08:15.091 "is_configured": true, 00:08:15.091 "data_offset": 2048, 00:08:15.091 "data_size": 63488 00:08:15.091 } 00:08:15.091 ] 00:08:15.091 }' 00:08:15.091 13:31:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:15.091 13:31:54 -- common/autotest_common.sh@10 -- # set +x 00:08:15.351 13:31:54 -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:15.351 [2024-07-10 13:31:54.665106] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.351 [2024-07-10 13:31:54.665127] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:15.351 [2024-07-10 13:31:54.665136] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.351 [2024-07-10 13:31:54.665145] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.351 [2024-07-10 13:31:54.665149] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829e34780 name raid_bdev1, state offline 00:08:15.351 13:31:54 -- bdev/bdev_raid.sh@443 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:15.351 13:31:54 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:08:15.611 13:31:54 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:08:15.611 13:31:54 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:08:15.611 13:31:54 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:08:15.611 13:31:54 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:08:15.611 13:31:54 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:15.870 13:31:55 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:08:15.870 13:31:55 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:08:15.870 13:31:55 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:08:16.130 13:31:55 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:08:16.130 13:31:55 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:08:16.130 13:31:55 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:08:16.130 13:31:55 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:08:16.130 13:31:55 -- bdev/bdev_raid.sh@455 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:16.130 [2024-07-10 13:31:55.405281] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:16.130 [2024-07-10 13:31:55.405324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.130 [2024-07-10 13:31:55.405345] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e35400 00:08:16.130 [2024-07-10 13:31:55.405351] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.130 [2024-07-10 13:31:55.405863] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.130 [2024-07-10 13:31:55.405889] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:16.130 [2024-07-10 13:31:55.405907] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:08:16.130 [2024-07-10 13:31:55.405915] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:16.130 pt2 00:08:16.130 13:31:55 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:16.130 13:31:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:16.130 13:31:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:16.130 13:31:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:16.130 13:31:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:16.130 13:31:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:16.130 13:31:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:16.130 13:31:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:16.130 13:31:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:16.130 13:31:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:16.130 13:31:55 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:16.130 13:31:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.450 13:31:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:16.450 "name": "raid_bdev1", 00:08:16.450 "uuid": "c2a6a262-3ec0-11ef-b9c4-5b09e08d4792", 00:08:16.450 "strip_size_kb": 0, 00:08:16.450 "state": "configuring", 00:08:16.450 "raid_level": "raid1", 00:08:16.450 "superblock": true, 00:08:16.450 "num_base_bdevs": 3, 00:08:16.450 "num_base_bdevs_discovered": 1, 00:08:16.450 "num_base_bdevs_operational": 2, 00:08:16.450 "base_bdevs_list": [ 00:08:16.450 { 00:08:16.450 "name": null, 00:08:16.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.450 "is_configured": false, 00:08:16.450 "data_offset": 2048, 00:08:16.450 "data_size": 63488 00:08:16.450 }, 00:08:16.450 { 00:08:16.450 "name": "pt2", 00:08:16.450 "uuid": "1927751a-920e-cc5b-bf28-ab1ac7cb5289", 00:08:16.450 "is_configured": true, 00:08:16.450 "data_offset": 2048, 00:08:16.450 "data_size": 63488 00:08:16.450 }, 00:08:16.450 { 00:08:16.450 "name": null, 00:08:16.450 "uuid": "c451275d-4051-7e50-a0bc-342b5962d431", 00:08:16.450 "is_configured": false, 00:08:16.450 "data_offset": 2048, 00:08:16.450 "data_size": 63488 00:08:16.450 } 00:08:16.450 ] 00:08:16.450 }' 00:08:16.450 13:31:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:16.450 13:31:55 -- common/autotest_common.sh@10 -- # set +x 00:08:16.710 13:31:55 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:08:16.710 13:31:55 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:08:16.710 13:31:55 -- bdev/bdev_raid.sh@462 -- # i=2 00:08:16.710 13:31:55 -- bdev/bdev_raid.sh@463 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:16.710 [2024-07-10 13:31:56.065423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:16.710 [2024-07-10 13:31:56.065482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.710 [2024-07-10 13:31:56.065503] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e34780 00:08:16.710 [2024-07-10 13:31:56.065508] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.710 [2024-07-10 13:31:56.065574] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.710 [2024-07-10 13:31:56.065581] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:16.710 [2024-07-10 13:31:56.065604] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:08:16.710 [2024-07-10 13:31:56.065609] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:16.710 [2024-07-10 13:31:56.065626] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x829e35180 00:08:16.710 [2024-07-10 13:31:56.065628] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:16.710 [2024-07-10 13:31:56.065642] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829e97e20 00:08:16.710 [2024-07-10 13:31:56.065672] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829e35180 00:08:16.710 [2024-07-10 13:31:56.065674] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x829e35180 00:08:16.710 [2024-07-10 13:31:56.065689] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.710 pt3 00:08:16.969 13:31:56 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:16.969 13:31:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:16.969 13:31:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:16.969 13:31:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:16.969 13:31:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:16.969 13:31:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:16.969 13:31:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:16.969 13:31:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:16.969 13:31:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:16.969 13:31:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:16.969 13:31:56 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:16.969 13:31:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.969 13:31:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:16.969 "name": "raid_bdev1", 00:08:16.969 "uuid": "c2a6a262-3ec0-11ef-b9c4-5b09e08d4792", 00:08:16.969 "strip_size_kb": 0, 00:08:16.969 "state": "online", 00:08:16.969 "raid_level": "raid1", 00:08:16.969 "superblock": true, 00:08:16.969 "num_base_bdevs": 3, 00:08:16.969 "num_base_bdevs_discovered": 2, 00:08:16.969 "num_base_bdevs_operational": 2, 00:08:16.969 "base_bdevs_list": [ 00:08:16.969 { 00:08:16.969 "name": null, 00:08:16.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.969 "is_configured": false, 
00:08:16.969 "data_offset": 2048, 00:08:16.969 "data_size": 63488 00:08:16.969 }, 00:08:16.969 { 00:08:16.969 "name": "pt2", 00:08:16.969 "uuid": "1927751a-920e-cc5b-bf28-ab1ac7cb5289", 00:08:16.969 "is_configured": true, 00:08:16.969 "data_offset": 2048, 00:08:16.969 "data_size": 63488 00:08:16.969 }, 00:08:16.969 { 00:08:16.969 "name": "pt3", 00:08:16.969 "uuid": "c451275d-4051-7e50-a0bc-342b5962d431", 00:08:16.969 "is_configured": true, 00:08:16.969 "data_offset": 2048, 00:08:16.969 "data_size": 63488 00:08:16.969 } 00:08:16.969 ] 00:08:16.969 }' 00:08:16.969 13:31:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:16.969 13:31:56 -- common/autotest_common.sh@10 -- # set +x 00:08:17.228 13:31:56 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:08:17.228 13:31:56 -- bdev/bdev_raid.sh@470 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:17.487 [2024-07-10 13:31:56.713556] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:17.487 [2024-07-10 13:31:56.713575] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.487 [2024-07-10 13:31:56.713585] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.487 [2024-07-10 13:31:56.713592] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.487 [2024-07-10 13:31:56.713595] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829e35180 name raid_bdev1, state offline 00:08:17.487 13:31:56 -- bdev/bdev_raid.sh@471 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:17.487 13:31:56 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:08:17.746 13:31:56 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:08:17.746 13:31:56 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:08:17.746 13:31:56 -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:17.746 [2024-07-10 13:31:57.089640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:17.746 [2024-07-10 13:31:57.089697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.746 [2024-07-10 13:31:57.089717] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e35680 00:08:17.746 [2024-07-10 13:31:57.089722] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.746 [2024-07-10 13:31:57.090203] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.746 [2024-07-10 13:31:57.090235] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:17.746 [2024-07-10 13:31:57.090252] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:08:17.746 [2024-07-10 13:31:57.090260] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:17.746 pt1 00:08:17.746 13:31:57 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:17.746 13:31:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:17.746 13:31:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:17.746 13:31:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:17.746 13:31:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 
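Because raid_bdev1 was created with -s, every base bdev carries an on-disk raid superblock, and the portion of the trace above leans on that: after the raid bdev and its passthru members are torn down, simply re-registering a passthru bdev makes bdev_raid examine it, find the superblock, and re-create raid_bdev1 without another bdev_raid_create call. A rough sketch of that reassembly, reusing the bdev names and UUIDs from the trace:

SPDK_DIR=/usr/home/vagrant/spdk_repo/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Re-register two of the original members; each examine finds the raid superblock.
$RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # configuring

$RPC bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # online once enough members are back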
00:08:17.746 13:31:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:17.746 13:31:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:17.746 13:31:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:17.746 13:31:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:17.746 13:31:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:17.746 13:31:57 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:17.746 13:31:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.005 13:31:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:18.005 "name": "raid_bdev1", 00:08:18.005 "uuid": "c2a6a262-3ec0-11ef-b9c4-5b09e08d4792", 00:08:18.005 "strip_size_kb": 0, 00:08:18.005 "state": "configuring", 00:08:18.005 "raid_level": "raid1", 00:08:18.005 "superblock": true, 00:08:18.005 "num_base_bdevs": 3, 00:08:18.005 "num_base_bdevs_discovered": 1, 00:08:18.005 "num_base_bdevs_operational": 3, 00:08:18.005 "base_bdevs_list": [ 00:08:18.005 { 00:08:18.005 "name": "pt1", 00:08:18.005 "uuid": "adcc11fb-6564-fd52-91ee-a850120e4878", 00:08:18.005 "is_configured": true, 00:08:18.005 "data_offset": 2048, 00:08:18.005 "data_size": 63488 00:08:18.005 }, 00:08:18.005 { 00:08:18.005 "name": null, 00:08:18.005 "uuid": "1927751a-920e-cc5b-bf28-ab1ac7cb5289", 00:08:18.005 "is_configured": false, 00:08:18.005 "data_offset": 2048, 00:08:18.005 "data_size": 63488 00:08:18.005 }, 00:08:18.005 { 00:08:18.005 "name": null, 00:08:18.005 "uuid": "c451275d-4051-7e50-a0bc-342b5962d431", 00:08:18.005 "is_configured": false, 00:08:18.005 "data_offset": 2048, 00:08:18.005 "data_size": 63488 00:08:18.005 } 00:08:18.005 ] 00:08:18.005 }' 00:08:18.005 13:31:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:18.005 13:31:57 -- common/autotest_common.sh@10 -- # set +x 00:08:18.263 13:31:57 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:08:18.263 13:31:57 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:08:18.263 13:31:57 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:18.522 13:31:57 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:08:18.522 13:31:57 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:08:18.522 13:31:57 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:08:18.781 13:31:57 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:08:18.781 13:31:57 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:08:18.781 13:31:57 -- bdev/bdev_raid.sh@489 -- # i=2 00:08:18.781 13:31:57 -- bdev/bdev_raid.sh@490 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:18.781 [2024-07-10 13:31:58.121851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:18.781 [2024-07-10 13:31:58.121890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.781 [2024-07-10 13:31:58.121928] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e34780 00:08:18.781 [2024-07-10 13:31:58.121933] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.781 [2024-07-10 13:31:58.121997] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.781 [2024-07-10 13:31:58.122003] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:18.781 [2024-07-10 13:31:58.122016] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:08:18.781 [2024-07-10 13:31:58.122019] bdev_raid.c:3239:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:18.781 [2024-07-10 13:31:58.122085] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:18.781 [2024-07-10 13:31:58.122089] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829e34c80 name raid_bdev1, state configuring 00:08:18.781 [2024-07-10 13:31:58.122099] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:18.781 pt3 00:08:18.781 13:31:58 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:18.781 13:31:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:18.781 13:31:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:18.781 13:31:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:18.781 13:31:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:18.781 13:31:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:18.781 13:31:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:18.781 13:31:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:18.781 13:31:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:18.781 13:31:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:18.781 13:31:58 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:18.781 13:31:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.040 13:31:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:19.040 "name": "raid_bdev1", 00:08:19.040 "uuid": "c2a6a262-3ec0-11ef-b9c4-5b09e08d4792", 00:08:19.040 "strip_size_kb": 0, 00:08:19.040 "state": "configuring", 00:08:19.040 "raid_level": "raid1", 00:08:19.040 "superblock": true, 00:08:19.040 "num_base_bdevs": 3, 00:08:19.040 "num_base_bdevs_discovered": 1, 00:08:19.040 "num_base_bdevs_operational": 2, 00:08:19.040 "base_bdevs_list": [ 00:08:19.040 { 00:08:19.040 "name": null, 00:08:19.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.040 "is_configured": false, 00:08:19.040 "data_offset": 2048, 00:08:19.040 "data_size": 63488 00:08:19.040 }, 00:08:19.040 { 00:08:19.040 "name": null, 00:08:19.040 "uuid": "1927751a-920e-cc5b-bf28-ab1ac7cb5289", 00:08:19.040 "is_configured": false, 00:08:19.040 "data_offset": 2048, 00:08:19.040 "data_size": 63488 00:08:19.040 }, 00:08:19.040 { 00:08:19.040 "name": "pt3", 00:08:19.040 "uuid": "c451275d-4051-7e50-a0bc-342b5962d431", 00:08:19.040 "is_configured": true, 00:08:19.040 "data_offset": 2048, 00:08:19.040 "data_size": 63488 00:08:19.040 } 00:08:19.040 ] 00:08:19.040 }' 00:08:19.040 13:31:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:19.040 13:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:19.299 13:31:58 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:08:19.299 13:31:58 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:08:19.299 13:31:58 -- bdev/bdev_raid.sh@498 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:19.559 [2024-07-10 13:31:58.777991] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:19.559 [2024-07-10 13:31:58.778031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.559 [2024-07-10 13:31:58.778047] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e35400 00:08:19.559 [2024-07-10 13:31:58.778053] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.559 [2024-07-10 13:31:58.778133] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.559 [2024-07-10 13:31:58.778145] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:19.559 [2024-07-10 13:31:58.778158] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:08:19.559 [2024-07-10 13:31:58.778163] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:19.559 [2024-07-10 13:31:58.778179] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x829e34c80 00:08:19.559 [2024-07-10 13:31:58.778182] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:19.559 [2024-07-10 13:31:58.778197] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829e97e20 00:08:19.559 [2024-07-10 13:31:58.778232] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829e34c80 00:08:19.559 [2024-07-10 13:31:58.778235] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x829e34c80 00:08:19.559 [2024-07-10 13:31:58.778249] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.559 pt2 00:08:19.559 13:31:58 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:08:19.559 13:31:58 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:08:19.559 13:31:58 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:19.559 13:31:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:19.559 13:31:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:19.559 13:31:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:19.559 13:31:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:19.559 13:31:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:19.559 13:31:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:19.559 13:31:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:19.559 13:31:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:19.559 13:31:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:19.559 13:31:58 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:19.559 13:31:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.818 13:31:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:19.818 "name": "raid_bdev1", 00:08:19.818 "uuid": "c2a6a262-3ec0-11ef-b9c4-5b09e08d4792", 00:08:19.818 "strip_size_kb": 0, 00:08:19.818 "state": "online", 00:08:19.818 "raid_level": "raid1", 00:08:19.818 "superblock": true, 00:08:19.818 "num_base_bdevs": 3, 00:08:19.818 "num_base_bdevs_discovered": 2, 00:08:19.818 "num_base_bdevs_operational": 2, 00:08:19.818 "base_bdevs_list": [ 00:08:19.818 { 00:08:19.818 "name": null, 00:08:19.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.818 "is_configured": false, 00:08:19.818 "data_offset": 2048, 00:08:19.818 "data_size": 63488 00:08:19.818 
}, 00:08:19.818 { 00:08:19.818 "name": "pt2", 00:08:19.818 "uuid": "1927751a-920e-cc5b-bf28-ab1ac7cb5289", 00:08:19.818 "is_configured": true, 00:08:19.818 "data_offset": 2048, 00:08:19.818 "data_size": 63488 00:08:19.818 }, 00:08:19.818 { 00:08:19.818 "name": "pt3", 00:08:19.818 "uuid": "c451275d-4051-7e50-a0bc-342b5962d431", 00:08:19.818 "is_configured": true, 00:08:19.818 "data_offset": 2048, 00:08:19.818 "data_size": 63488 00:08:19.818 } 00:08:19.818 ] 00:08:19.818 }' 00:08:19.818 13:31:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:19.818 13:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:20.077 13:31:59 -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:20.077 13:31:59 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:08:20.077 [2024-07-10 13:31:59.434142] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@506 -- # '[' c2a6a262-3ec0-11ef-b9c4-5b09e08d4792 '!=' c2a6a262-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@511 -- # killprocess 51048 00:08:20.336 13:31:59 -- common/autotest_common.sh@926 -- # '[' -z 51048 ']' 00:08:20.336 13:31:59 -- common/autotest_common.sh@930 -- # kill -0 51048 00:08:20.336 13:31:59 -- common/autotest_common.sh@931 -- # uname 00:08:20.336 13:31:59 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:20.336 13:31:59 -- common/autotest_common.sh@934 -- # tail -1 00:08:20.336 13:31:59 -- common/autotest_common.sh@934 -- # ps -c -o command 51048 00:08:20.336 13:31:59 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:08:20.336 13:31:59 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:20.336 killing process with pid 51048 00:08:20.336 13:31:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 51048' 00:08:20.336 13:31:59 -- common/autotest_common.sh@945 -- # kill 51048 00:08:20.336 [2024-07-10 13:31:59.468450] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.336 [2024-07-10 13:31:59.468465] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.336 [2024-07-10 13:31:59.468483] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.336 [2024-07-10 13:31:59.468487] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829e34c80 name raid_bdev1, state offline 00:08:20.336 13:31:59 -- common/autotest_common.sh@950 -- # wait 51048 00:08:20.336 [2024-07-10 13:31:59.482520] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@513 -- # return 0 00:08:20.336 00:08:20.336 real 0m12.622s 00:08:20.336 user 0m22.449s 00:08:20.336 sys 0m2.111s 00:08:20.336 13:31:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.336 13:31:59 -- common/autotest_common.sh@10 -- # set +x 00:08:20.336 ************************************ 00:08:20.336 END TEST raid_superblock_test 00:08:20.336 ************************************ 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:08:20.336 13:31:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:20.336 13:31:59 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:08:20.336 13:31:59 -- common/autotest_common.sh@10 -- # set +x 00:08:20.336 ************************************ 00:08:20.336 START TEST raid_state_function_test 00:08:20.336 ************************************ 00:08:20.336 13:31:59 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=51430 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 51430' 00:08:20.336 Process raid pid: 51430 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:20.336 13:31:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 51430 /var/tmp/spdk-raid.sock 00:08:20.336 13:31:59 -- common/autotest_common.sh@819 -- # '[' -z 51430 ']' 00:08:20.336 13:31:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:20.336 13:31:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:20.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:20.336 13:31:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:08:20.336 13:31:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:20.336 13:31:59 -- common/autotest_common.sh@10 -- # set +x 00:08:20.336 [2024-07-10 13:31:59.699897] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:20.336 [2024-07-10 13:31:59.700238] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:20.904 EAL: TSC is not safe to use in SMP mode 00:08:20.904 EAL: TSC is not invariant 00:08:20.904 [2024-07-10 13:32:00.128566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.904 [2024-07-10 13:32:00.206799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.904 [2024-07-10 13:32:00.207277] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.904 [2024-07-10 13:32:00.207286] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.473 13:32:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:21.473 13:32:00 -- common/autotest_common.sh@852 -- # return 0 00:08:21.473 13:32:00 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:08:21.473 [2024-07-10 13:32:00.778412] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:21.473 [2024-07-10 13:32:00.778460] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:21.473 [2024-07-10 13:32:00.778464] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.473 [2024-07-10 13:32:00.778470] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.473 [2024-07-10 13:32:00.778473] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:21.473 [2024-07-10 13:32:00.778478] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:21.473 [2024-07-10 13:32:00.778481] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:21.473 [2024-07-10 13:32:00.778486] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:21.473 13:32:00 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:21.473 13:32:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:21.473 13:32:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:21.473 13:32:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:21.473 13:32:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:21.473 13:32:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:21.473 13:32:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:21.473 13:32:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:21.473 13:32:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:21.473 13:32:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:21.473 13:32:00 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:21.473 13:32:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.732 13:32:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:21.732 "name": "Existed_Raid", 00:08:21.732 
"uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.732 "strip_size_kb": 64, 00:08:21.733 "state": "configuring", 00:08:21.733 "raid_level": "raid0", 00:08:21.733 "superblock": false, 00:08:21.733 "num_base_bdevs": 4, 00:08:21.733 "num_base_bdevs_discovered": 0, 00:08:21.733 "num_base_bdevs_operational": 4, 00:08:21.733 "base_bdevs_list": [ 00:08:21.733 { 00:08:21.733 "name": "BaseBdev1", 00:08:21.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.733 "is_configured": false, 00:08:21.733 "data_offset": 0, 00:08:21.733 "data_size": 0 00:08:21.733 }, 00:08:21.733 { 00:08:21.733 "name": "BaseBdev2", 00:08:21.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.733 "is_configured": false, 00:08:21.733 "data_offset": 0, 00:08:21.733 "data_size": 0 00:08:21.733 }, 00:08:21.733 { 00:08:21.733 "name": "BaseBdev3", 00:08:21.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.733 "is_configured": false, 00:08:21.733 "data_offset": 0, 00:08:21.733 "data_size": 0 00:08:21.733 }, 00:08:21.733 { 00:08:21.733 "name": "BaseBdev4", 00:08:21.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.733 "is_configured": false, 00:08:21.733 "data_offset": 0, 00:08:21.733 "data_size": 0 00:08:21.733 } 00:08:21.733 ] 00:08:21.733 }' 00:08:21.733 13:32:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:21.733 13:32:00 -- common/autotest_common.sh@10 -- # set +x 00:08:21.993 13:32:01 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:22.251 [2024-07-10 13:32:01.454532] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.251 [2024-07-10 13:32:01.454552] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a0d4500 name Existed_Raid, state configuring 00:08:22.251 13:32:01 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:08:22.251 [2024-07-10 13:32:01.614569] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.251 [2024-07-10 13:32:01.614605] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.251 [2024-07-10 13:32:01.614608] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.251 [2024-07-10 13:32:01.614614] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.251 [2024-07-10 13:32:01.614616] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:22.251 [2024-07-10 13:32:01.614621] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:22.251 [2024-07-10 13:32:01.614623] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:22.252 [2024-07-10 13:32:01.614629] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:22.511 13:32:01 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:22.511 [2024-07-10 13:32:01.803417] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.511 BaseBdev1 00:08:22.511 13:32:01 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:08:22.511 13:32:01 -- common/autotest_common.sh@887 -- # local 
bdev_name=BaseBdev1 00:08:22.511 13:32:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:22.511 13:32:01 -- common/autotest_common.sh@889 -- # local i 00:08:22.511 13:32:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:22.511 13:32:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:22.511 13:32:01 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:22.774 13:32:02 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:23.043 [ 00:08:23.043 { 00:08:23.043 "name": "BaseBdev1", 00:08:23.043 "aliases": [ 00:08:23.043 "ca27d6ce-3ec0-11ef-b9c4-5b09e08d4792" 00:08:23.043 ], 00:08:23.043 "product_name": "Malloc disk", 00:08:23.043 "block_size": 512, 00:08:23.043 "num_blocks": 65536, 00:08:23.043 "uuid": "ca27d6ce-3ec0-11ef-b9c4-5b09e08d4792", 00:08:23.043 "assigned_rate_limits": { 00:08:23.043 "rw_ios_per_sec": 0, 00:08:23.043 "rw_mbytes_per_sec": 0, 00:08:23.043 "r_mbytes_per_sec": 0, 00:08:23.043 "w_mbytes_per_sec": 0 00:08:23.043 }, 00:08:23.043 "claimed": true, 00:08:23.043 "claim_type": "exclusive_write", 00:08:23.043 "zoned": false, 00:08:23.043 "supported_io_types": { 00:08:23.043 "read": true, 00:08:23.043 "write": true, 00:08:23.043 "unmap": true, 00:08:23.043 "write_zeroes": true, 00:08:23.043 "flush": true, 00:08:23.043 "reset": true, 00:08:23.043 "compare": false, 00:08:23.043 "compare_and_write": false, 00:08:23.043 "abort": true, 00:08:23.043 "nvme_admin": false, 00:08:23.043 "nvme_io": false 00:08:23.043 }, 00:08:23.043 "memory_domains": [ 00:08:23.043 { 00:08:23.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.043 "dma_device_type": 2 00:08:23.043 } 00:08:23.043 ], 00:08:23.043 "driver_specific": {} 00:08:23.043 } 00:08:23.043 ] 00:08:23.043 13:32:02 -- common/autotest_common.sh@895 -- # return 0 00:08:23.043 13:32:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:23.043 13:32:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:23.043 13:32:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:23.043 13:32:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:23.043 13:32:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:23.043 13:32:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:23.043 13:32:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:23.043 13:32:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:23.043 13:32:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:23.043 13:32:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:23.043 13:32:02 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:23.044 13:32:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.044 13:32:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:23.044 "name": "Existed_Raid", 00:08:23.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.044 "strip_size_kb": 64, 00:08:23.044 "state": "configuring", 00:08:23.044 "raid_level": "raid0", 00:08:23.044 "superblock": false, 00:08:23.044 "num_base_bdevs": 4, 00:08:23.044 "num_base_bdevs_discovered": 1, 00:08:23.044 "num_base_bdevs_operational": 4, 00:08:23.044 "base_bdevs_list": [ 00:08:23.044 { 00:08:23.044 "name": "BaseBdev1", 
00:08:23.044 "uuid": "ca27d6ce-3ec0-11ef-b9c4-5b09e08d4792", 00:08:23.044 "is_configured": true, 00:08:23.044 "data_offset": 0, 00:08:23.044 "data_size": 65536 00:08:23.044 }, 00:08:23.044 { 00:08:23.044 "name": "BaseBdev2", 00:08:23.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.044 "is_configured": false, 00:08:23.044 "data_offset": 0, 00:08:23.044 "data_size": 0 00:08:23.044 }, 00:08:23.044 { 00:08:23.044 "name": "BaseBdev3", 00:08:23.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.044 "is_configured": false, 00:08:23.044 "data_offset": 0, 00:08:23.044 "data_size": 0 00:08:23.044 }, 00:08:23.044 { 00:08:23.044 "name": "BaseBdev4", 00:08:23.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.044 "is_configured": false, 00:08:23.044 "data_offset": 0, 00:08:23.044 "data_size": 0 00:08:23.044 } 00:08:23.044 ] 00:08:23.044 }' 00:08:23.044 13:32:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:23.044 13:32:02 -- common/autotest_common.sh@10 -- # set +x 00:08:23.613 13:32:02 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:23.613 [2024-07-10 13:32:02.846797] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:23.613 [2024-07-10 13:32:02.846824] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a0d4500 name Existed_Raid, state configuring 00:08:23.613 13:32:02 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:08:23.613 13:32:02 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:08:23.872 [2024-07-10 13:32:03.038839] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.872 [2024-07-10 13:32:03.039437] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.872 [2024-07-10 13:32:03.039475] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.872 [2024-07-10 13:32:03.039482] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:23.872 [2024-07-10 13:32:03.039488] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:23.872 [2024-07-10 13:32:03.039490] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:23.872 [2024-07-10 13:32:03.039495] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:23.872 13:32:03 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:08:23.872 13:32:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:23.872 13:32:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:23.872 13:32:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:23.872 13:32:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:23.872 13:32:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:23.872 13:32:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:23.872 13:32:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:23.872 13:32:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:23.872 13:32:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:23.872 13:32:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:23.872 13:32:03 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:08:23.872 13:32:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.872 13:32:03 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:24.129 13:32:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:24.129 "name": "Existed_Raid", 00:08:24.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.129 "strip_size_kb": 64, 00:08:24.129 "state": "configuring", 00:08:24.129 "raid_level": "raid0", 00:08:24.129 "superblock": false, 00:08:24.129 "num_base_bdevs": 4, 00:08:24.129 "num_base_bdevs_discovered": 1, 00:08:24.129 "num_base_bdevs_operational": 4, 00:08:24.129 "base_bdevs_list": [ 00:08:24.129 { 00:08:24.129 "name": "BaseBdev1", 00:08:24.129 "uuid": "ca27d6ce-3ec0-11ef-b9c4-5b09e08d4792", 00:08:24.129 "is_configured": true, 00:08:24.129 "data_offset": 0, 00:08:24.129 "data_size": 65536 00:08:24.129 }, 00:08:24.129 { 00:08:24.129 "name": "BaseBdev2", 00:08:24.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.129 "is_configured": false, 00:08:24.129 "data_offset": 0, 00:08:24.129 "data_size": 0 00:08:24.129 }, 00:08:24.129 { 00:08:24.129 "name": "BaseBdev3", 00:08:24.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.129 "is_configured": false, 00:08:24.129 "data_offset": 0, 00:08:24.129 "data_size": 0 00:08:24.129 }, 00:08:24.129 { 00:08:24.129 "name": "BaseBdev4", 00:08:24.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.129 "is_configured": false, 00:08:24.129 "data_offset": 0, 00:08:24.129 "data_size": 0 00:08:24.129 } 00:08:24.129 ] 00:08:24.129 }' 00:08:24.129 13:32:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:24.129 13:32:03 -- common/autotest_common.sh@10 -- # set +x 00:08:24.388 13:32:03 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:24.388 [2024-07-10 13:32:03.711068] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:24.388 BaseBdev2 00:08:24.388 13:32:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:08:24.388 13:32:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:08:24.388 13:32:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:24.388 13:32:03 -- common/autotest_common.sh@889 -- # local i 00:08:24.388 13:32:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:24.388 13:32:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:24.388 13:32:03 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:24.647 13:32:03 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:24.906 [ 00:08:24.906 { 00:08:24.906 "name": "BaseBdev2", 00:08:24.906 "aliases": [ 00:08:24.906 "cb4b081a-3ec0-11ef-b9c4-5b09e08d4792" 00:08:24.906 ], 00:08:24.906 "product_name": "Malloc disk", 00:08:24.906 "block_size": 512, 00:08:24.906 "num_blocks": 65536, 00:08:24.906 "uuid": "cb4b081a-3ec0-11ef-b9c4-5b09e08d4792", 00:08:24.906 "assigned_rate_limits": { 00:08:24.906 "rw_ios_per_sec": 0, 00:08:24.906 "rw_mbytes_per_sec": 0, 00:08:24.906 "r_mbytes_per_sec": 0, 00:08:24.906 "w_mbytes_per_sec": 0 00:08:24.906 }, 00:08:24.906 "claimed": true, 00:08:24.906 "claim_type": "exclusive_write", 00:08:24.906 "zoned": false, 00:08:24.906 
"supported_io_types": { 00:08:24.906 "read": true, 00:08:24.906 "write": true, 00:08:24.906 "unmap": true, 00:08:24.906 "write_zeroes": true, 00:08:24.906 "flush": true, 00:08:24.906 "reset": true, 00:08:24.906 "compare": false, 00:08:24.906 "compare_and_write": false, 00:08:24.906 "abort": true, 00:08:24.906 "nvme_admin": false, 00:08:24.906 "nvme_io": false 00:08:24.906 }, 00:08:24.906 "memory_domains": [ 00:08:24.906 { 00:08:24.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.906 "dma_device_type": 2 00:08:24.906 } 00:08:24.906 ], 00:08:24.906 "driver_specific": {} 00:08:24.906 } 00:08:24.906 ] 00:08:24.906 13:32:04 -- common/autotest_common.sh@895 -- # return 0 00:08:24.906 13:32:04 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:24.906 13:32:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:24.906 13:32:04 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:24.906 13:32:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:24.907 13:32:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:24.907 13:32:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:24.907 13:32:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:24.907 13:32:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:24.907 13:32:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:24.907 13:32:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:24.907 13:32:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:24.907 13:32:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:24.907 13:32:04 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:24.907 13:32:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.166 13:32:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:25.166 "name": "Existed_Raid", 00:08:25.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.166 "strip_size_kb": 64, 00:08:25.166 "state": "configuring", 00:08:25.166 "raid_level": "raid0", 00:08:25.166 "superblock": false, 00:08:25.166 "num_base_bdevs": 4, 00:08:25.166 "num_base_bdevs_discovered": 2, 00:08:25.166 "num_base_bdevs_operational": 4, 00:08:25.166 "base_bdevs_list": [ 00:08:25.166 { 00:08:25.166 "name": "BaseBdev1", 00:08:25.166 "uuid": "ca27d6ce-3ec0-11ef-b9c4-5b09e08d4792", 00:08:25.166 "is_configured": true, 00:08:25.166 "data_offset": 0, 00:08:25.166 "data_size": 65536 00:08:25.166 }, 00:08:25.166 { 00:08:25.166 "name": "BaseBdev2", 00:08:25.166 "uuid": "cb4b081a-3ec0-11ef-b9c4-5b09e08d4792", 00:08:25.166 "is_configured": true, 00:08:25.166 "data_offset": 0, 00:08:25.166 "data_size": 65536 00:08:25.166 }, 00:08:25.166 { 00:08:25.166 "name": "BaseBdev3", 00:08:25.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.166 "is_configured": false, 00:08:25.166 "data_offset": 0, 00:08:25.166 "data_size": 0 00:08:25.166 }, 00:08:25.166 { 00:08:25.166 "name": "BaseBdev4", 00:08:25.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.166 "is_configured": false, 00:08:25.166 "data_offset": 0, 00:08:25.166 "data_size": 0 00:08:25.166 } 00:08:25.166 ] 00:08:25.166 }' 00:08:25.166 13:32:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:25.166 13:32:04 -- common/autotest_common.sh@10 -- # set +x 00:08:25.425 13:32:04 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 
-b BaseBdev3 00:08:25.425 [2024-07-10 13:32:04.755225] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:25.425 BaseBdev3 00:08:25.425 13:32:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:08:25.425 13:32:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:08:25.425 13:32:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:25.425 13:32:04 -- common/autotest_common.sh@889 -- # local i 00:08:25.425 13:32:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:25.425 13:32:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:25.426 13:32:04 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:25.684 13:32:04 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:25.943 [ 00:08:25.943 { 00:08:25.943 "name": "BaseBdev3", 00:08:25.943 "aliases": [ 00:08:25.943 "cbea5cb8-3ec0-11ef-b9c4-5b09e08d4792" 00:08:25.943 ], 00:08:25.943 "product_name": "Malloc disk", 00:08:25.943 "block_size": 512, 00:08:25.943 "num_blocks": 65536, 00:08:25.943 "uuid": "cbea5cb8-3ec0-11ef-b9c4-5b09e08d4792", 00:08:25.943 "assigned_rate_limits": { 00:08:25.943 "rw_ios_per_sec": 0, 00:08:25.943 "rw_mbytes_per_sec": 0, 00:08:25.943 "r_mbytes_per_sec": 0, 00:08:25.943 "w_mbytes_per_sec": 0 00:08:25.943 }, 00:08:25.943 "claimed": true, 00:08:25.943 "claim_type": "exclusive_write", 00:08:25.943 "zoned": false, 00:08:25.943 "supported_io_types": { 00:08:25.943 "read": true, 00:08:25.943 "write": true, 00:08:25.943 "unmap": true, 00:08:25.943 "write_zeroes": true, 00:08:25.943 "flush": true, 00:08:25.943 "reset": true, 00:08:25.943 "compare": false, 00:08:25.943 "compare_and_write": false, 00:08:25.943 "abort": true, 00:08:25.943 "nvme_admin": false, 00:08:25.943 "nvme_io": false 00:08:25.943 }, 00:08:25.943 "memory_domains": [ 00:08:25.943 { 00:08:25.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.943 "dma_device_type": 2 00:08:25.943 } 00:08:25.943 ], 00:08:25.943 "driver_specific": {} 00:08:25.943 } 00:08:25.943 ] 00:08:25.943 13:32:05 -- common/autotest_common.sh@895 -- # return 0 00:08:25.943 13:32:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:25.943 13:32:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:25.943 13:32:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:25.943 13:32:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:25.943 13:32:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:25.943 13:32:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:25.943 13:32:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:25.943 13:32:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:25.943 13:32:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:25.943 13:32:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:25.943 13:32:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:25.943 13:32:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:25.943 13:32:05 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:25.943 13:32:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.202 13:32:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:26.202 
"name": "Existed_Raid", 00:08:26.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.202 "strip_size_kb": 64, 00:08:26.202 "state": "configuring", 00:08:26.202 "raid_level": "raid0", 00:08:26.202 "superblock": false, 00:08:26.202 "num_base_bdevs": 4, 00:08:26.202 "num_base_bdevs_discovered": 3, 00:08:26.202 "num_base_bdevs_operational": 4, 00:08:26.202 "base_bdevs_list": [ 00:08:26.202 { 00:08:26.202 "name": "BaseBdev1", 00:08:26.202 "uuid": "ca27d6ce-3ec0-11ef-b9c4-5b09e08d4792", 00:08:26.202 "is_configured": true, 00:08:26.202 "data_offset": 0, 00:08:26.202 "data_size": 65536 00:08:26.202 }, 00:08:26.202 { 00:08:26.202 "name": "BaseBdev2", 00:08:26.202 "uuid": "cb4b081a-3ec0-11ef-b9c4-5b09e08d4792", 00:08:26.202 "is_configured": true, 00:08:26.202 "data_offset": 0, 00:08:26.202 "data_size": 65536 00:08:26.202 }, 00:08:26.202 { 00:08:26.202 "name": "BaseBdev3", 00:08:26.202 "uuid": "cbea5cb8-3ec0-11ef-b9c4-5b09e08d4792", 00:08:26.202 "is_configured": true, 00:08:26.202 "data_offset": 0, 00:08:26.202 "data_size": 65536 00:08:26.202 }, 00:08:26.202 { 00:08:26.202 "name": "BaseBdev4", 00:08:26.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.202 "is_configured": false, 00:08:26.202 "data_offset": 0, 00:08:26.202 "data_size": 0 00:08:26.202 } 00:08:26.202 ] 00:08:26.202 }' 00:08:26.202 13:32:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:26.202 13:32:05 -- common/autotest_common.sh@10 -- # set +x 00:08:26.461 13:32:05 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:08:26.461 [2024-07-10 13:32:05.815439] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:26.461 [2024-07-10 13:32:05.815460] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a0d4a00 00:08:26.461 [2024-07-10 13:32:05.815463] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:08:26.461 [2024-07-10 13:32:05.815484] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a137ec0 00:08:26.461 [2024-07-10 13:32:05.815555] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a0d4a00 00:08:26.461 [2024-07-10 13:32:05.815558] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82a0d4a00 00:08:26.461 [2024-07-10 13:32:05.815580] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.461 BaseBdev4 00:08:26.720 13:32:05 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:08:26.720 13:32:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:08:26.720 13:32:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:26.720 13:32:05 -- common/autotest_common.sh@889 -- # local i 00:08:26.720 13:32:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:26.720 13:32:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:26.720 13:32:05 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:26.720 13:32:05 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:26.980 [ 00:08:26.980 { 00:08:26.980 "name": "BaseBdev4", 00:08:26.980 "aliases": [ 00:08:26.980 "cc8c2366-3ec0-11ef-b9c4-5b09e08d4792" 00:08:26.980 ], 00:08:26.980 "product_name": "Malloc disk", 00:08:26.980 "block_size": 512, 
00:08:26.980 "num_blocks": 65536, 00:08:26.980 "uuid": "cc8c2366-3ec0-11ef-b9c4-5b09e08d4792", 00:08:26.980 "assigned_rate_limits": { 00:08:26.980 "rw_ios_per_sec": 0, 00:08:26.980 "rw_mbytes_per_sec": 0, 00:08:26.980 "r_mbytes_per_sec": 0, 00:08:26.980 "w_mbytes_per_sec": 0 00:08:26.980 }, 00:08:26.980 "claimed": true, 00:08:26.980 "claim_type": "exclusive_write", 00:08:26.980 "zoned": false, 00:08:26.980 "supported_io_types": { 00:08:26.980 "read": true, 00:08:26.980 "write": true, 00:08:26.980 "unmap": true, 00:08:26.980 "write_zeroes": true, 00:08:26.980 "flush": true, 00:08:26.980 "reset": true, 00:08:26.980 "compare": false, 00:08:26.980 "compare_and_write": false, 00:08:26.980 "abort": true, 00:08:26.980 "nvme_admin": false, 00:08:26.980 "nvme_io": false 00:08:26.980 }, 00:08:26.980 "memory_domains": [ 00:08:26.980 { 00:08:26.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.980 "dma_device_type": 2 00:08:26.980 } 00:08:26.980 ], 00:08:26.980 "driver_specific": {} 00:08:26.980 } 00:08:26.980 ] 00:08:26.980 13:32:06 -- common/autotest_common.sh@895 -- # return 0 00:08:26.980 13:32:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:26.980 13:32:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:26.980 13:32:06 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:08:26.980 13:32:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:26.980 13:32:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:26.980 13:32:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:26.980 13:32:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:26.980 13:32:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:26.980 13:32:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:26.980 13:32:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:26.980 13:32:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:26.980 13:32:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:26.980 13:32:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.980 13:32:06 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:27.239 13:32:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:27.239 "name": "Existed_Raid", 00:08:27.239 "uuid": "cc8c26af-3ec0-11ef-b9c4-5b09e08d4792", 00:08:27.239 "strip_size_kb": 64, 00:08:27.239 "state": "online", 00:08:27.239 "raid_level": "raid0", 00:08:27.239 "superblock": false, 00:08:27.239 "num_base_bdevs": 4, 00:08:27.239 "num_base_bdevs_discovered": 4, 00:08:27.239 "num_base_bdevs_operational": 4, 00:08:27.239 "base_bdevs_list": [ 00:08:27.239 { 00:08:27.239 "name": "BaseBdev1", 00:08:27.239 "uuid": "ca27d6ce-3ec0-11ef-b9c4-5b09e08d4792", 00:08:27.239 "is_configured": true, 00:08:27.239 "data_offset": 0, 00:08:27.239 "data_size": 65536 00:08:27.239 }, 00:08:27.239 { 00:08:27.239 "name": "BaseBdev2", 00:08:27.239 "uuid": "cb4b081a-3ec0-11ef-b9c4-5b09e08d4792", 00:08:27.239 "is_configured": true, 00:08:27.239 "data_offset": 0, 00:08:27.239 "data_size": 65536 00:08:27.239 }, 00:08:27.239 { 00:08:27.239 "name": "BaseBdev3", 00:08:27.239 "uuid": "cbea5cb8-3ec0-11ef-b9c4-5b09e08d4792", 00:08:27.239 "is_configured": true, 00:08:27.239 "data_offset": 0, 00:08:27.239 "data_size": 65536 00:08:27.239 }, 00:08:27.239 { 00:08:27.239 "name": "BaseBdev4", 00:08:27.239 "uuid": "cc8c2366-3ec0-11ef-b9c4-5b09e08d4792", 00:08:27.239 "is_configured": 
true, 00:08:27.239 "data_offset": 0, 00:08:27.239 "data_size": 65536 00:08:27.239 } 00:08:27.239 ] 00:08:27.239 }' 00:08:27.239 13:32:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:27.239 13:32:06 -- common/autotest_common.sh@10 -- # set +x 00:08:27.497 13:32:06 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:27.497 [2024-07-10 13:32:06.819558] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:27.497 [2024-07-10 13:32:06.819578] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.497 [2024-07-10 13:32:06.819587] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.497 13:32:06 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:08:27.497 13:32:06 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:08:27.497 13:32:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:27.497 13:32:06 -- bdev/bdev_raid.sh@197 -- # return 1 00:08:27.497 13:32:06 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:08:27.497 13:32:06 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:08:27.497 13:32:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:27.497 13:32:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:08:27.497 13:32:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:27.497 13:32:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:27.497 13:32:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:27.497 13:32:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:27.497 13:32:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:27.497 13:32:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:27.497 13:32:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:27.497 13:32:06 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:27.497 13:32:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.756 13:32:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:27.757 "name": "Existed_Raid", 00:08:27.757 "uuid": "cc8c26af-3ec0-11ef-b9c4-5b09e08d4792", 00:08:27.757 "strip_size_kb": 64, 00:08:27.757 "state": "offline", 00:08:27.757 "raid_level": "raid0", 00:08:27.757 "superblock": false, 00:08:27.757 "num_base_bdevs": 4, 00:08:27.757 "num_base_bdevs_discovered": 3, 00:08:27.757 "num_base_bdevs_operational": 3, 00:08:27.757 "base_bdevs_list": [ 00:08:27.757 { 00:08:27.757 "name": null, 00:08:27.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.757 "is_configured": false, 00:08:27.757 "data_offset": 0, 00:08:27.757 "data_size": 65536 00:08:27.757 }, 00:08:27.757 { 00:08:27.757 "name": "BaseBdev2", 00:08:27.757 "uuid": "cb4b081a-3ec0-11ef-b9c4-5b09e08d4792", 00:08:27.757 "is_configured": true, 00:08:27.757 "data_offset": 0, 00:08:27.757 "data_size": 65536 00:08:27.757 }, 00:08:27.757 { 00:08:27.757 "name": "BaseBdev3", 00:08:27.757 "uuid": "cbea5cb8-3ec0-11ef-b9c4-5b09e08d4792", 00:08:27.757 "is_configured": true, 00:08:27.757 "data_offset": 0, 00:08:27.757 "data_size": 65536 00:08:27.757 }, 00:08:27.757 { 00:08:27.757 "name": "BaseBdev4", 00:08:27.757 "uuid": "cc8c2366-3ec0-11ef-b9c4-5b09e08d4792", 00:08:27.757 "is_configured": true, 00:08:27.757 "data_offset": 0, 00:08:27.757 "data_size": 65536 00:08:27.757 } 00:08:27.757 ] 00:08:27.757 }' 00:08:27.757 
13:32:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:27.757 13:32:07 -- common/autotest_common.sh@10 -- # set +x 00:08:28.018 13:32:07 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:08:28.018 13:32:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:28.018 13:32:07 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:28.018 13:32:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:28.292 13:32:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:28.292 13:32:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:28.292 13:32:07 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:28.292 [2024-07-10 13:32:07.640391] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:28.292 13:32:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:28.292 13:32:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:28.550 13:32:07 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:28.550 13:32:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:28.550 13:32:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:28.550 13:32:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:28.550 13:32:07 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:08:28.807 [2024-07-10 13:32:08.021076] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:28.807 13:32:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:28.807 13:32:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:28.807 13:32:08 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:28.807 13:32:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:29.065 13:32:08 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:29.065 13:32:08 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:29.066 13:32:08 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:08:29.066 [2024-07-10 13:32:08.389762] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:08:29.066 [2024-07-10 13:32:08.389786] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a0d4a00 name Existed_Raid, state offline 00:08:29.066 13:32:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:29.066 13:32:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:29.066 13:32:08 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:29.066 13:32:08 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:08:29.358 13:32:08 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:08:29.358 13:32:08 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:08:29.358 13:32:08 -- bdev/bdev_raid.sh@287 -- # killprocess 51430 00:08:29.358 13:32:08 -- common/autotest_common.sh@926 -- # '[' -z 51430 ']' 00:08:29.358 13:32:08 -- common/autotest_common.sh@930 -- # kill -0 51430 00:08:29.358 13:32:08 -- common/autotest_common.sh@931 -- # uname 00:08:29.358 13:32:08 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:29.358 13:32:08 -- 
common/autotest_common.sh@934 -- # ps -c -o command 51430 00:08:29.358 13:32:08 -- common/autotest_common.sh@934 -- # tail -1 00:08:29.358 13:32:08 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:08:29.358 13:32:08 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:29.358 killing process with pid 51430 00:08:29.358 13:32:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 51430' 00:08:29.358 13:32:08 -- common/autotest_common.sh@945 -- # kill 51430 00:08:29.359 [2024-07-10 13:32:08.610011] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.359 [2024-07-10 13:32:08.610045] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:29.359 13:32:08 -- common/autotest_common.sh@950 -- # wait 51430 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@289 -- # return 0 00:08:29.617 00:08:29.617 real 0m9.076s 00:08:29.617 user 0m15.656s 00:08:29.617 sys 0m1.803s 00:08:29.617 13:32:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.617 13:32:08 -- common/autotest_common.sh@10 -- # set +x 00:08:29.617 ************************************ 00:08:29.617 END TEST raid_state_function_test 00:08:29.617 ************************************ 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:08:29.617 13:32:08 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:29.617 13:32:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.617 13:32:08 -- common/autotest_common.sh@10 -- # set +x 00:08:29.617 ************************************ 00:08:29.617 START TEST raid_state_function_test_sb 00:08:29.617 ************************************ 00:08:29.617 13:32:08 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@210 
-- # local superblock_create_arg 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@226 -- # raid_pid=51700 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 51700' 00:08:29.617 Process raid pid: 51700 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:29.617 13:32:08 -- bdev/bdev_raid.sh@228 -- # waitforlisten 51700 /var/tmp/spdk-raid.sock 00:08:29.617 13:32:08 -- common/autotest_common.sh@819 -- # '[' -z 51700 ']' 00:08:29.617 13:32:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:29.617 13:32:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:29.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:29.617 13:32:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:29.617 13:32:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:29.617 13:32:08 -- common/autotest_common.sh@10 -- # set +x 00:08:29.617 [2024-07-10 13:32:08.829757] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:29.617 [2024-07-10 13:32:08.830028] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:30.184 EAL: TSC is not safe to use in SMP mode 00:08:30.184 EAL: TSC is not invariant 00:08:30.184 [2024-07-10 13:32:09.263794] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.184 [2024-07-10 13:32:09.342964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.184 [2024-07-10 13:32:09.343435] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.184 [2024-07-10 13:32:09.343445] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.442 13:32:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:30.442 13:32:09 -- common/autotest_common.sh@852 -- # return 0 00:08:30.442 13:32:09 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:08:30.700 [2024-07-10 13:32:09.902436] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:30.700 [2024-07-10 13:32:09.902484] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:30.700 [2024-07-10 13:32:09.902488] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.700 [2024-07-10 13:32:09.902495] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.700 [2024-07-10 13:32:09.902497] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:30.700 [2024-07-10 13:32:09.902503] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:30.700 [2024-07-10 13:32:09.902506] bdev.c:8019:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:30.700 [2024-07-10 13:32:09.902511] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:30.700 13:32:09 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:30.700 13:32:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:30.700 13:32:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:30.700 13:32:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:30.700 13:32:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:30.700 13:32:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:30.700 13:32:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:30.700 13:32:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:30.700 13:32:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:30.700 13:32:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:30.700 13:32:09 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:30.700 13:32:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.959 13:32:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:30.959 "name": "Existed_Raid", 00:08:30.959 "uuid": "cefbc5b0-3ec0-11ef-b9c4-5b09e08d4792", 00:08:30.959 "strip_size_kb": 64, 00:08:30.959 "state": "configuring", 00:08:30.959 "raid_level": "raid0", 00:08:30.959 "superblock": true, 00:08:30.959 "num_base_bdevs": 4, 00:08:30.959 "num_base_bdevs_discovered": 0, 00:08:30.959 "num_base_bdevs_operational": 4, 00:08:30.959 "base_bdevs_list": [ 00:08:30.959 { 00:08:30.959 "name": "BaseBdev1", 00:08:30.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.959 "is_configured": false, 00:08:30.959 "data_offset": 0, 00:08:30.959 "data_size": 0 00:08:30.959 }, 00:08:30.959 { 00:08:30.959 "name": "BaseBdev2", 00:08:30.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.959 "is_configured": false, 00:08:30.959 "data_offset": 0, 00:08:30.959 "data_size": 0 00:08:30.959 }, 00:08:30.959 { 00:08:30.959 "name": "BaseBdev3", 00:08:30.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.959 "is_configured": false, 00:08:30.959 "data_offset": 0, 00:08:30.959 "data_size": 0 00:08:30.959 }, 00:08:30.959 { 00:08:30.959 "name": "BaseBdev4", 00:08:30.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.959 "is_configured": false, 00:08:30.959 "data_offset": 0, 00:08:30.959 "data_size": 0 00:08:30.959 } 00:08:30.959 ] 00:08:30.959 }' 00:08:30.959 13:32:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:30.959 13:32:10 -- common/autotest_common.sh@10 -- # set +x 00:08:31.217 13:32:10 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:31.217 [2024-07-10 13:32:10.578515] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:31.217 [2024-07-10 13:32:10.578551] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a977500 name Existed_Raid, state configuring 00:08:31.477 13:32:10 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:08:31.477 [2024-07-10 13:32:10.770554] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:08:31.477 [2024-07-10 13:32:10.770594] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:31.477 [2024-07-10 13:32:10.770597] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:31.477 [2024-07-10 13:32:10.770602] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:31.477 [2024-07-10 13:32:10.770605] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:31.477 [2024-07-10 13:32:10.770610] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:31.477 [2024-07-10 13:32:10.770612] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:31.477 [2024-07-10 13:32:10.770617] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:31.477 13:32:10 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:31.735 [2024-07-10 13:32:10.971415] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:31.735 BaseBdev1 00:08:31.735 13:32:10 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:08:31.735 13:32:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:08:31.735 13:32:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:31.735 13:32:10 -- common/autotest_common.sh@889 -- # local i 00:08:31.735 13:32:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:31.735 13:32:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:31.735 13:32:10 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:31.993 13:32:11 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:31.993 [ 00:08:31.993 { 00:08:31.993 "name": "BaseBdev1", 00:08:31.993 "aliases": [ 00:08:31.993 "cf9ec27e-3ec0-11ef-b9c4-5b09e08d4792" 00:08:31.993 ], 00:08:31.993 "product_name": "Malloc disk", 00:08:31.993 "block_size": 512, 00:08:31.993 "num_blocks": 65536, 00:08:31.993 "uuid": "cf9ec27e-3ec0-11ef-b9c4-5b09e08d4792", 00:08:31.993 "assigned_rate_limits": { 00:08:31.993 "rw_ios_per_sec": 0, 00:08:31.993 "rw_mbytes_per_sec": 0, 00:08:31.993 "r_mbytes_per_sec": 0, 00:08:31.993 "w_mbytes_per_sec": 0 00:08:31.993 }, 00:08:31.993 "claimed": true, 00:08:31.993 "claim_type": "exclusive_write", 00:08:31.993 "zoned": false, 00:08:31.993 "supported_io_types": { 00:08:31.993 "read": true, 00:08:31.993 "write": true, 00:08:31.993 "unmap": true, 00:08:31.993 "write_zeroes": true, 00:08:31.993 "flush": true, 00:08:31.993 "reset": true, 00:08:31.993 "compare": false, 00:08:31.993 "compare_and_write": false, 00:08:31.993 "abort": true, 00:08:31.993 "nvme_admin": false, 00:08:31.993 "nvme_io": false 00:08:31.993 }, 00:08:31.993 "memory_domains": [ 00:08:31.993 { 00:08:31.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.993 "dma_device_type": 2 00:08:31.993 } 00:08:31.993 ], 00:08:31.993 "driver_specific": {} 00:08:31.993 } 00:08:31.993 ] 00:08:32.252 13:32:11 -- common/autotest_common.sh@895 -- # return 0 00:08:32.252 13:32:11 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:32.252 13:32:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:32.252 
13:32:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:32.252 13:32:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:32.252 13:32:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:32.252 13:32:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:32.252 13:32:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:32.252 13:32:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:32.252 13:32:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:32.253 13:32:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:32.253 13:32:11 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:32.253 13:32:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.253 13:32:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:32.253 "name": "Existed_Raid", 00:08:32.253 "uuid": "cf803cb8-3ec0-11ef-b9c4-5b09e08d4792", 00:08:32.253 "strip_size_kb": 64, 00:08:32.253 "state": "configuring", 00:08:32.253 "raid_level": "raid0", 00:08:32.253 "superblock": true, 00:08:32.253 "num_base_bdevs": 4, 00:08:32.253 "num_base_bdevs_discovered": 1, 00:08:32.253 "num_base_bdevs_operational": 4, 00:08:32.253 "base_bdevs_list": [ 00:08:32.253 { 00:08:32.253 "name": "BaseBdev1", 00:08:32.253 "uuid": "cf9ec27e-3ec0-11ef-b9c4-5b09e08d4792", 00:08:32.253 "is_configured": true, 00:08:32.253 "data_offset": 2048, 00:08:32.253 "data_size": 63488 00:08:32.253 }, 00:08:32.253 { 00:08:32.253 "name": "BaseBdev2", 00:08:32.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.253 "is_configured": false, 00:08:32.253 "data_offset": 0, 00:08:32.253 "data_size": 0 00:08:32.253 }, 00:08:32.253 { 00:08:32.253 "name": "BaseBdev3", 00:08:32.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.253 "is_configured": false, 00:08:32.253 "data_offset": 0, 00:08:32.253 "data_size": 0 00:08:32.253 }, 00:08:32.253 { 00:08:32.253 "name": "BaseBdev4", 00:08:32.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.253 "is_configured": false, 00:08:32.253 "data_offset": 0, 00:08:32.253 "data_size": 0 00:08:32.253 } 00:08:32.253 ] 00:08:32.253 }' 00:08:32.253 13:32:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:32.253 13:32:11 -- common/autotest_common.sh@10 -- # set +x 00:08:32.512 13:32:11 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:32.771 [2024-07-10 13:32:12.010772] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:32.771 [2024-07-10 13:32:12.010799] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a977500 name Existed_Raid, state configuring 00:08:32.771 13:32:12 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:08:32.771 13:32:12 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:33.030 13:32:12 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:33.030 BaseBdev1 00:08:33.289 13:32:12 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:08:33.289 13:32:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:08:33.289 13:32:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:33.289 13:32:12 -- common/autotest_common.sh@889 -- # local i 00:08:33.289 13:32:12 -- 
common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:33.289 13:32:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:33.289 13:32:12 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:33.289 13:32:12 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:33.547 [ 00:08:33.547 { 00:08:33.547 "name": "BaseBdev1", 00:08:33.547 "aliases": [ 00:08:33.547 "d076f6f2-3ec0-11ef-b9c4-5b09e08d4792" 00:08:33.547 ], 00:08:33.547 "product_name": "Malloc disk", 00:08:33.547 "block_size": 512, 00:08:33.547 "num_blocks": 65536, 00:08:33.547 "uuid": "d076f6f2-3ec0-11ef-b9c4-5b09e08d4792", 00:08:33.547 "assigned_rate_limits": { 00:08:33.547 "rw_ios_per_sec": 0, 00:08:33.547 "rw_mbytes_per_sec": 0, 00:08:33.547 "r_mbytes_per_sec": 0, 00:08:33.547 "w_mbytes_per_sec": 0 00:08:33.547 }, 00:08:33.547 "claimed": false, 00:08:33.547 "zoned": false, 00:08:33.547 "supported_io_types": { 00:08:33.547 "read": true, 00:08:33.547 "write": true, 00:08:33.547 "unmap": true, 00:08:33.547 "write_zeroes": true, 00:08:33.547 "flush": true, 00:08:33.547 "reset": true, 00:08:33.547 "compare": false, 00:08:33.547 "compare_and_write": false, 00:08:33.547 "abort": true, 00:08:33.547 "nvme_admin": false, 00:08:33.547 "nvme_io": false 00:08:33.547 }, 00:08:33.547 "memory_domains": [ 00:08:33.547 { 00:08:33.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.547 "dma_device_type": 2 00:08:33.547 } 00:08:33.547 ], 00:08:33.547 "driver_specific": {} 00:08:33.547 } 00:08:33.547 ] 00:08:33.547 13:32:12 -- common/autotest_common.sh@895 -- # return 0 00:08:33.547 13:32:12 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:08:33.815 [2024-07-10 13:32:12.959597] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.815 [2024-07-10 13:32:12.959983] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.815 [2024-07-10 13:32:12.960020] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.815 [2024-07-10 13:32:12.960024] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:33.815 [2024-07-10 13:32:12.960030] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.815 [2024-07-10 13:32:12.960033] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:33.815 [2024-07-10 13:32:12.960054] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:33.815 13:32:12 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:08:33.815 13:32:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:33.815 13:32:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:33.815 13:32:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:33.815 13:32:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:33.815 13:32:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:33.815 13:32:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:33.815 13:32:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:33.816 13:32:12 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:33.816 13:32:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:33.816 13:32:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:33.816 13:32:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:33.816 13:32:12 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:33.816 13:32:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.074 13:32:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:34.074 "name": "Existed_Raid", 00:08:34.074 "uuid": "d0ce4218-3ec0-11ef-b9c4-5b09e08d4792", 00:08:34.074 "strip_size_kb": 64, 00:08:34.074 "state": "configuring", 00:08:34.074 "raid_level": "raid0", 00:08:34.074 "superblock": true, 00:08:34.074 "num_base_bdevs": 4, 00:08:34.074 "num_base_bdevs_discovered": 1, 00:08:34.074 "num_base_bdevs_operational": 4, 00:08:34.074 "base_bdevs_list": [ 00:08:34.074 { 00:08:34.074 "name": "BaseBdev1", 00:08:34.074 "uuid": "d076f6f2-3ec0-11ef-b9c4-5b09e08d4792", 00:08:34.074 "is_configured": true, 00:08:34.074 "data_offset": 2048, 00:08:34.074 "data_size": 63488 00:08:34.074 }, 00:08:34.074 { 00:08:34.074 "name": "BaseBdev2", 00:08:34.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.074 "is_configured": false, 00:08:34.074 "data_offset": 0, 00:08:34.074 "data_size": 0 00:08:34.074 }, 00:08:34.074 { 00:08:34.074 "name": "BaseBdev3", 00:08:34.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.074 "is_configured": false, 00:08:34.074 "data_offset": 0, 00:08:34.074 "data_size": 0 00:08:34.074 }, 00:08:34.074 { 00:08:34.074 "name": "BaseBdev4", 00:08:34.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.074 "is_configured": false, 00:08:34.074 "data_offset": 0, 00:08:34.074 "data_size": 0 00:08:34.074 } 00:08:34.074 ] 00:08:34.074 }' 00:08:34.074 13:32:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:34.074 13:32:13 -- common/autotest_common.sh@10 -- # set +x 00:08:34.337 13:32:13 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:34.337 [2024-07-10 13:32:13.627789] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.337 BaseBdev2 00:08:34.337 13:32:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:08:34.337 13:32:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:08:34.337 13:32:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:34.337 13:32:13 -- common/autotest_common.sh@889 -- # local i 00:08:34.337 13:32:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:34.337 13:32:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:34.337 13:32:13 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:34.597 13:32:13 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:34.855 [ 00:08:34.856 { 00:08:34.856 "name": "BaseBdev2", 00:08:34.856 "aliases": [ 00:08:34.856 "d1343424-3ec0-11ef-b9c4-5b09e08d4792" 00:08:34.856 ], 00:08:34.856 "product_name": "Malloc disk", 00:08:34.856 "block_size": 512, 00:08:34.856 "num_blocks": 65536, 00:08:34.856 "uuid": "d1343424-3ec0-11ef-b9c4-5b09e08d4792", 00:08:34.856 "assigned_rate_limits": { 00:08:34.856 "rw_ios_per_sec": 0, 00:08:34.856 
"rw_mbytes_per_sec": 0, 00:08:34.856 "r_mbytes_per_sec": 0, 00:08:34.856 "w_mbytes_per_sec": 0 00:08:34.856 }, 00:08:34.856 "claimed": true, 00:08:34.856 "claim_type": "exclusive_write", 00:08:34.856 "zoned": false, 00:08:34.856 "supported_io_types": { 00:08:34.856 "read": true, 00:08:34.856 "write": true, 00:08:34.856 "unmap": true, 00:08:34.856 "write_zeroes": true, 00:08:34.856 "flush": true, 00:08:34.856 "reset": true, 00:08:34.856 "compare": false, 00:08:34.856 "compare_and_write": false, 00:08:34.856 "abort": true, 00:08:34.856 "nvme_admin": false, 00:08:34.856 "nvme_io": false 00:08:34.856 }, 00:08:34.856 "memory_domains": [ 00:08:34.856 { 00:08:34.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.856 "dma_device_type": 2 00:08:34.856 } 00:08:34.856 ], 00:08:34.856 "driver_specific": {} 00:08:34.856 } 00:08:34.856 ] 00:08:34.856 13:32:14 -- common/autotest_common.sh@895 -- # return 0 00:08:34.856 13:32:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:34.856 13:32:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:34.856 13:32:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:34.856 13:32:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:34.856 13:32:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:34.856 13:32:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:34.856 13:32:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:34.856 13:32:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:34.856 13:32:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:34.856 13:32:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:34.856 13:32:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:34.856 13:32:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:34.856 13:32:14 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:34.856 13:32:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.856 13:32:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:34.856 "name": "Existed_Raid", 00:08:34.856 "uuid": "d0ce4218-3ec0-11ef-b9c4-5b09e08d4792", 00:08:34.856 "strip_size_kb": 64, 00:08:34.856 "state": "configuring", 00:08:34.856 "raid_level": "raid0", 00:08:34.856 "superblock": true, 00:08:34.856 "num_base_bdevs": 4, 00:08:34.856 "num_base_bdevs_discovered": 2, 00:08:34.856 "num_base_bdevs_operational": 4, 00:08:34.856 "base_bdevs_list": [ 00:08:34.856 { 00:08:34.856 "name": "BaseBdev1", 00:08:34.856 "uuid": "d076f6f2-3ec0-11ef-b9c4-5b09e08d4792", 00:08:34.856 "is_configured": true, 00:08:34.856 "data_offset": 2048, 00:08:34.856 "data_size": 63488 00:08:34.856 }, 00:08:34.856 { 00:08:34.856 "name": "BaseBdev2", 00:08:34.856 "uuid": "d1343424-3ec0-11ef-b9c4-5b09e08d4792", 00:08:34.856 "is_configured": true, 00:08:34.856 "data_offset": 2048, 00:08:34.856 "data_size": 63488 00:08:34.856 }, 00:08:34.856 { 00:08:34.856 "name": "BaseBdev3", 00:08:34.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.856 "is_configured": false, 00:08:34.856 "data_offset": 0, 00:08:34.856 "data_size": 0 00:08:34.856 }, 00:08:34.856 { 00:08:34.856 "name": "BaseBdev4", 00:08:34.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.856 "is_configured": false, 00:08:34.856 "data_offset": 0, 00:08:34.856 "data_size": 0 00:08:34.856 } 00:08:34.856 ] 00:08:34.856 }' 00:08:34.856 13:32:14 -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:08:34.856 13:32:14 -- common/autotest_common.sh@10 -- # set +x 00:08:35.421 13:32:14 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:35.421 [2024-07-10 13:32:14.659940] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:35.421 BaseBdev3 00:08:35.421 13:32:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:08:35.421 13:32:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:08:35.421 13:32:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:35.421 13:32:14 -- common/autotest_common.sh@889 -- # local i 00:08:35.421 13:32:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:35.421 13:32:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:35.421 13:32:14 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:35.679 13:32:14 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:35.937 [ 00:08:35.938 { 00:08:35.938 "name": "BaseBdev3", 00:08:35.938 "aliases": [ 00:08:35.938 "d1d1b313-3ec0-11ef-b9c4-5b09e08d4792" 00:08:35.938 ], 00:08:35.938 "product_name": "Malloc disk", 00:08:35.938 "block_size": 512, 00:08:35.938 "num_blocks": 65536, 00:08:35.938 "uuid": "d1d1b313-3ec0-11ef-b9c4-5b09e08d4792", 00:08:35.938 "assigned_rate_limits": { 00:08:35.938 "rw_ios_per_sec": 0, 00:08:35.938 "rw_mbytes_per_sec": 0, 00:08:35.938 "r_mbytes_per_sec": 0, 00:08:35.938 "w_mbytes_per_sec": 0 00:08:35.938 }, 00:08:35.938 "claimed": true, 00:08:35.938 "claim_type": "exclusive_write", 00:08:35.938 "zoned": false, 00:08:35.938 "supported_io_types": { 00:08:35.938 "read": true, 00:08:35.938 "write": true, 00:08:35.938 "unmap": true, 00:08:35.938 "write_zeroes": true, 00:08:35.938 "flush": true, 00:08:35.938 "reset": true, 00:08:35.938 "compare": false, 00:08:35.938 "compare_and_write": false, 00:08:35.938 "abort": true, 00:08:35.938 "nvme_admin": false, 00:08:35.938 "nvme_io": false 00:08:35.938 }, 00:08:35.938 "memory_domains": [ 00:08:35.938 { 00:08:35.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.938 "dma_device_type": 2 00:08:35.938 } 00:08:35.938 ], 00:08:35.938 "driver_specific": {} 00:08:35.938 } 00:08:35.938 ] 00:08:35.938 13:32:15 -- common/autotest_common.sh@895 -- # return 0 00:08:35.938 13:32:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:35.938 13:32:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:35.938 13:32:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:35.938 13:32:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:35.938 13:32:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:35.938 13:32:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:35.938 13:32:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:35.938 13:32:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:35.938 13:32:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:35.938 13:32:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:35.938 13:32:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:35.938 13:32:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:35.938 13:32:15 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:35.938 13:32:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.938 13:32:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:35.938 "name": "Existed_Raid", 00:08:35.938 "uuid": "d0ce4218-3ec0-11ef-b9c4-5b09e08d4792", 00:08:35.938 "strip_size_kb": 64, 00:08:35.938 "state": "configuring", 00:08:35.938 "raid_level": "raid0", 00:08:35.938 "superblock": true, 00:08:35.938 "num_base_bdevs": 4, 00:08:35.938 "num_base_bdevs_discovered": 3, 00:08:35.938 "num_base_bdevs_operational": 4, 00:08:35.938 "base_bdevs_list": [ 00:08:35.938 { 00:08:35.938 "name": "BaseBdev1", 00:08:35.938 "uuid": "d076f6f2-3ec0-11ef-b9c4-5b09e08d4792", 00:08:35.938 "is_configured": true, 00:08:35.938 "data_offset": 2048, 00:08:35.938 "data_size": 63488 00:08:35.938 }, 00:08:35.938 { 00:08:35.938 "name": "BaseBdev2", 00:08:35.938 "uuid": "d1343424-3ec0-11ef-b9c4-5b09e08d4792", 00:08:35.938 "is_configured": true, 00:08:35.938 "data_offset": 2048, 00:08:35.938 "data_size": 63488 00:08:35.938 }, 00:08:35.938 { 00:08:35.938 "name": "BaseBdev3", 00:08:35.938 "uuid": "d1d1b313-3ec0-11ef-b9c4-5b09e08d4792", 00:08:35.938 "is_configured": true, 00:08:35.938 "data_offset": 2048, 00:08:35.938 "data_size": 63488 00:08:35.938 }, 00:08:35.938 { 00:08:35.938 "name": "BaseBdev4", 00:08:35.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.938 "is_configured": false, 00:08:35.938 "data_offset": 0, 00:08:35.938 "data_size": 0 00:08:35.938 } 00:08:35.938 ] 00:08:35.938 }' 00:08:35.938 13:32:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:35.938 13:32:15 -- common/autotest_common.sh@10 -- # set +x 00:08:36.197 13:32:15 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:08:36.456 [2024-07-10 13:32:15.740111] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:36.456 [2024-07-10 13:32:15.740173] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a977a00 00:08:36.456 [2024-07-10 13:32:15.740177] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:36.456 [2024-07-10 13:32:15.740193] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a9daec0 00:08:36.456 [2024-07-10 13:32:15.740226] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a977a00 00:08:36.456 [2024-07-10 13:32:15.740229] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82a977a00 00:08:36.456 [2024-07-10 13:32:15.740243] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.456 BaseBdev4 00:08:36.456 13:32:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:08:36.456 13:32:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:08:36.456 13:32:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:36.456 13:32:15 -- common/autotest_common.sh@889 -- # local i 00:08:36.456 13:32:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:36.456 13:32:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:36.456 13:32:15 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:36.714 13:32:15 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 
00:08:36.974 [ 00:08:36.974 { 00:08:36.974 "name": "BaseBdev4", 00:08:36.974 "aliases": [ 00:08:36.974 "d2768576-3ec0-11ef-b9c4-5b09e08d4792" 00:08:36.974 ], 00:08:36.974 "product_name": "Malloc disk", 00:08:36.974 "block_size": 512, 00:08:36.974 "num_blocks": 65536, 00:08:36.974 "uuid": "d2768576-3ec0-11ef-b9c4-5b09e08d4792", 00:08:36.974 "assigned_rate_limits": { 00:08:36.974 "rw_ios_per_sec": 0, 00:08:36.974 "rw_mbytes_per_sec": 0, 00:08:36.974 "r_mbytes_per_sec": 0, 00:08:36.974 "w_mbytes_per_sec": 0 00:08:36.974 }, 00:08:36.974 "claimed": true, 00:08:36.974 "claim_type": "exclusive_write", 00:08:36.974 "zoned": false, 00:08:36.974 "supported_io_types": { 00:08:36.974 "read": true, 00:08:36.974 "write": true, 00:08:36.974 "unmap": true, 00:08:36.974 "write_zeroes": true, 00:08:36.974 "flush": true, 00:08:36.974 "reset": true, 00:08:36.974 "compare": false, 00:08:36.974 "compare_and_write": false, 00:08:36.974 "abort": true, 00:08:36.974 "nvme_admin": false, 00:08:36.974 "nvme_io": false 00:08:36.974 }, 00:08:36.974 "memory_domains": [ 00:08:36.974 { 00:08:36.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.974 "dma_device_type": 2 00:08:36.974 } 00:08:36.974 ], 00:08:36.974 "driver_specific": {} 00:08:36.974 } 00:08:36.974 ] 00:08:36.974 13:32:16 -- common/autotest_common.sh@895 -- # return 0 00:08:36.974 13:32:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:36.974 13:32:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:36.974 13:32:16 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:08:36.974 13:32:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:36.974 13:32:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:36.974 13:32:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:36.974 13:32:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:36.974 13:32:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:36.974 13:32:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:36.974 13:32:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:36.974 13:32:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:36.974 13:32:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:36.974 13:32:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.974 13:32:16 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:37.233 13:32:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:37.233 "name": "Existed_Raid", 00:08:37.233 "uuid": "d0ce4218-3ec0-11ef-b9c4-5b09e08d4792", 00:08:37.233 "strip_size_kb": 64, 00:08:37.233 "state": "online", 00:08:37.234 "raid_level": "raid0", 00:08:37.234 "superblock": true, 00:08:37.234 "num_base_bdevs": 4, 00:08:37.234 "num_base_bdevs_discovered": 4, 00:08:37.234 "num_base_bdevs_operational": 4, 00:08:37.234 "base_bdevs_list": [ 00:08:37.234 { 00:08:37.234 "name": "BaseBdev1", 00:08:37.234 "uuid": "d076f6f2-3ec0-11ef-b9c4-5b09e08d4792", 00:08:37.234 "is_configured": true, 00:08:37.234 "data_offset": 2048, 00:08:37.234 "data_size": 63488 00:08:37.234 }, 00:08:37.234 { 00:08:37.234 "name": "BaseBdev2", 00:08:37.234 "uuid": "d1343424-3ec0-11ef-b9c4-5b09e08d4792", 00:08:37.234 "is_configured": true, 00:08:37.234 "data_offset": 2048, 00:08:37.234 "data_size": 63488 00:08:37.234 }, 00:08:37.234 { 00:08:37.234 "name": "BaseBdev3", 00:08:37.234 "uuid": "d1d1b313-3ec0-11ef-b9c4-5b09e08d4792", 00:08:37.234 
"is_configured": true, 00:08:37.234 "data_offset": 2048, 00:08:37.234 "data_size": 63488 00:08:37.234 }, 00:08:37.234 { 00:08:37.234 "name": "BaseBdev4", 00:08:37.234 "uuid": "d2768576-3ec0-11ef-b9c4-5b09e08d4792", 00:08:37.234 "is_configured": true, 00:08:37.234 "data_offset": 2048, 00:08:37.234 "data_size": 63488 00:08:37.234 } 00:08:37.234 ] 00:08:37.234 }' 00:08:37.234 13:32:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:37.234 13:32:16 -- common/autotest_common.sh@10 -- # set +x 00:08:37.234 13:32:16 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:37.493 [2024-07-10 13:32:16.752196] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:37.493 [2024-07-10 13:32:16.752217] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.493 [2024-07-10 13:32:16.752226] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.493 13:32:16 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:08:37.493 13:32:16 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:08:37.493 13:32:16 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:37.493 13:32:16 -- bdev/bdev_raid.sh@197 -- # return 1 00:08:37.493 13:32:16 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:08:37.493 13:32:16 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:08:37.493 13:32:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:37.493 13:32:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:08:37.493 13:32:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:37.493 13:32:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:37.493 13:32:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:37.493 13:32:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:37.493 13:32:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:37.493 13:32:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:37.493 13:32:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:37.493 13:32:16 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:37.493 13:32:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.753 13:32:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:37.753 "name": "Existed_Raid", 00:08:37.753 "uuid": "d0ce4218-3ec0-11ef-b9c4-5b09e08d4792", 00:08:37.753 "strip_size_kb": 64, 00:08:37.753 "state": "offline", 00:08:37.753 "raid_level": "raid0", 00:08:37.753 "superblock": true, 00:08:37.753 "num_base_bdevs": 4, 00:08:37.753 "num_base_bdevs_discovered": 3, 00:08:37.753 "num_base_bdevs_operational": 3, 00:08:37.753 "base_bdevs_list": [ 00:08:37.753 { 00:08:37.753 "name": null, 00:08:37.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.753 "is_configured": false, 00:08:37.753 "data_offset": 2048, 00:08:37.753 "data_size": 63488 00:08:37.753 }, 00:08:37.753 { 00:08:37.753 "name": "BaseBdev2", 00:08:37.753 "uuid": "d1343424-3ec0-11ef-b9c4-5b09e08d4792", 00:08:37.753 "is_configured": true, 00:08:37.753 "data_offset": 2048, 00:08:37.753 "data_size": 63488 00:08:37.753 }, 00:08:37.753 { 00:08:37.753 "name": "BaseBdev3", 00:08:37.753 "uuid": "d1d1b313-3ec0-11ef-b9c4-5b09e08d4792", 00:08:37.753 "is_configured": true, 00:08:37.753 "data_offset": 2048, 00:08:37.753 "data_size": 63488 00:08:37.753 }, 00:08:37.753 
{ 00:08:37.753 "name": "BaseBdev4", 00:08:37.753 "uuid": "d2768576-3ec0-11ef-b9c4-5b09e08d4792", 00:08:37.753 "is_configured": true, 00:08:37.753 "data_offset": 2048, 00:08:37.753 "data_size": 63488 00:08:37.753 } 00:08:37.753 ] 00:08:37.753 }' 00:08:37.753 13:32:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:37.753 13:32:16 -- common/autotest_common.sh@10 -- # set +x 00:08:38.012 13:32:17 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:08:38.012 13:32:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:38.012 13:32:17 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:38.012 13:32:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:38.271 13:32:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:38.271 13:32:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:38.271 13:32:17 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:38.271 [2024-07-10 13:32:17.605171] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:38.271 13:32:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:38.271 13:32:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:38.271 13:32:17 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:38.271 13:32:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:38.530 13:32:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:38.530 13:32:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:38.530 13:32:17 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:08:38.789 [2024-07-10 13:32:17.957932] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:38.789 13:32:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:38.789 13:32:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:38.789 13:32:17 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:38.789 13:32:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:38.789 13:32:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:38.789 13:32:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:38.789 13:32:18 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:08:39.048 [2024-07-10 13:32:18.322629] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:08:39.048 [2024-07-10 13:32:18.322651] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a977a00 name Existed_Raid, state offline 00:08:39.048 13:32:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:39.048 13:32:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:39.048 13:32:18 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:39.048 13:32:18 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:08:39.308 13:32:18 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:08:39.308 13:32:18 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:08:39.308 13:32:18 -- bdev/bdev_raid.sh@287 -- # killprocess 51700 00:08:39.308 13:32:18 -- common/autotest_common.sh@926 -- # '[' -z 51700 
']' 00:08:39.308 13:32:18 -- common/autotest_common.sh@930 -- # kill -0 51700 00:08:39.308 13:32:18 -- common/autotest_common.sh@931 -- # uname 00:08:39.308 13:32:18 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:39.308 13:32:18 -- common/autotest_common.sh@934 -- # ps -c -o command 51700 00:08:39.308 13:32:18 -- common/autotest_common.sh@934 -- # tail -1 00:08:39.308 13:32:18 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:08:39.308 13:32:18 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:39.308 killing process with pid 51700 00:08:39.308 13:32:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 51700' 00:08:39.308 13:32:18 -- common/autotest_common.sh@945 -- # kill 51700 00:08:39.308 [2024-07-10 13:32:18.514144] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:39.308 [2024-07-10 13:32:18.514176] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.308 13:32:18 -- common/autotest_common.sh@950 -- # wait 51700 00:08:39.308 13:32:18 -- bdev/bdev_raid.sh@289 -- # return 0 00:08:39.308 00:08:39.308 real 0m9.849s 00:08:39.308 user 0m17.273s 00:08:39.308 sys 0m1.733s 00:08:39.308 13:32:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.308 13:32:18 -- common/autotest_common.sh@10 -- # set +x 00:08:39.308 ************************************ 00:08:39.308 END TEST raid_state_function_test_sb 00:08:39.308 ************************************ 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:08:39.568 13:32:18 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:39.568 13:32:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:39.568 13:32:18 -- common/autotest_common.sh@10 -- # set +x 00:08:39.568 ************************************ 00:08:39.568 START TEST raid_superblock_test 00:08:39.568 ************************************ 00:08:39.568 13:32:18 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@357 -- # raid_pid=51973 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@358 -- # waitforlisten 51973 /var/tmp/spdk-raid.sock 00:08:39.568 13:32:18 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L 
bdev_raid 00:08:39.568 13:32:18 -- common/autotest_common.sh@819 -- # '[' -z 51973 ']' 00:08:39.568 13:32:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:39.568 13:32:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:39.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:39.568 13:32:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:39.568 13:32:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:39.568 13:32:18 -- common/autotest_common.sh@10 -- # set +x 00:08:39.568 [2024-07-10 13:32:18.721493] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:39.568 [2024-07-10 13:32:18.721759] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:39.828 EAL: TSC is not safe to use in SMP mode 00:08:39.828 EAL: TSC is not invariant 00:08:39.828 [2024-07-10 13:32:19.148080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.087 [2024-07-10 13:32:19.225719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.087 [2024-07-10 13:32:19.226145] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.087 [2024-07-10 13:32:19.226169] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.347 13:32:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:40.347 13:32:19 -- common/autotest_common.sh@852 -- # return 0 00:08:40.347 13:32:19 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:08:40.347 13:32:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:40.347 13:32:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:08:40.347 13:32:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:08:40.347 13:32:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:40.347 13:32:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:40.347 13:32:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:40.347 13:32:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:40.347 13:32:19 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:40.606 malloc1 00:08:40.606 13:32:19 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:40.869 [2024-07-10 13:32:19.997197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:40.869 [2024-07-10 13:32:19.997246] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.869 [2024-07-10 13:32:19.997746] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abb0780 00:08:40.869 [2024-07-10 13:32:19.997768] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.869 [2024-07-10 13:32:19.998410] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.869 [2024-07-10 13:32:19.998438] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:40.869 pt1 00:08:40.869 13:32:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:40.869 13:32:20 -- bdev/bdev_raid.sh@361 -- # 
(( i <= num_base_bdevs )) 00:08:40.869 13:32:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:08:40.869 13:32:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:08:40.869 13:32:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:40.869 13:32:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:40.869 13:32:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:40.869 13:32:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:40.869 13:32:20 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:40.869 malloc2 00:08:40.869 13:32:20 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:41.146 [2024-07-10 13:32:20.373248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:41.146 [2024-07-10 13:32:20.373292] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.146 [2024-07-10 13:32:20.373314] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abb0c80 00:08:41.146 [2024-07-10 13:32:20.373319] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.146 [2024-07-10 13:32:20.373760] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.146 [2024-07-10 13:32:20.373786] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:41.146 pt2 00:08:41.146 13:32:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:41.146 13:32:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:41.146 13:32:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:08:41.146 13:32:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:08:41.146 13:32:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:41.146 13:32:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.146 13:32:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.146 13:32:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.146 13:32:20 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:08:41.405 malloc3 00:08:41.405 13:32:20 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:41.405 [2024-07-10 13:32:20.717306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:41.405 [2024-07-10 13:32:20.717349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.405 [2024-07-10 13:32:20.717369] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abb1180 00:08:41.405 [2024-07-10 13:32:20.717391] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.405 [2024-07-10 13:32:20.717822] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.405 [2024-07-10 13:32:20.717849] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:41.405 pt3 00:08:41.405 13:32:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:41.405 13:32:20 -- bdev/bdev_raid.sh@361 -- # 
(( i <= num_base_bdevs )) 00:08:41.405 13:32:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:08:41.405 13:32:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:08:41.405 13:32:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:08:41.405 13:32:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.405 13:32:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.405 13:32:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.405 13:32:20 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:08:41.665 malloc4 00:08:41.665 13:32:20 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:08:41.925 [2024-07-10 13:32:21.085365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:08:41.925 [2024-07-10 13:32:21.085435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.925 [2024-07-10 13:32:21.085456] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abb1680 00:08:41.925 [2024-07-10 13:32:21.085462] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.925 [2024-07-10 13:32:21.085895] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.925 [2024-07-10 13:32:21.085922] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:08:41.925 pt4 00:08:41.925 13:32:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:41.925 13:32:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:41.925 13:32:21 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:08:41.925 [2024-07-10 13:32:21.281409] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:41.925 [2024-07-10 13:32:21.281827] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:41.925 [2024-07-10 13:32:21.281847] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:41.925 [2024-07-10 13:32:21.281854] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:08:41.925 [2024-07-10 13:32:21.281898] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82abb1900 00:08:41.925 [2024-07-10 13:32:21.281903] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:41.925 [2024-07-10 13:32:21.281930] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ac13e20 00:08:41.925 [2024-07-10 13:32:21.281979] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82abb1900 00:08:41.925 [2024-07-10 13:32:21.281982] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82abb1900 00:08:41.925 [2024-07-10 13:32:21.282000] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.184 13:32:21 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:08:42.184 13:32:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:42.184 13:32:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:42.184 13:32:21 -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:42.184 13:32:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:42.184 13:32:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:42.184 13:32:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:42.184 13:32:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:42.184 13:32:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:42.184 13:32:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:42.184 13:32:21 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:42.184 13:32:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.184 13:32:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:42.184 "name": "raid_bdev1", 00:08:42.184 "uuid": "d5c410d3-3ec0-11ef-b9c4-5b09e08d4792", 00:08:42.184 "strip_size_kb": 64, 00:08:42.184 "state": "online", 00:08:42.184 "raid_level": "raid0", 00:08:42.184 "superblock": true, 00:08:42.184 "num_base_bdevs": 4, 00:08:42.184 "num_base_bdevs_discovered": 4, 00:08:42.184 "num_base_bdevs_operational": 4, 00:08:42.184 "base_bdevs_list": [ 00:08:42.184 { 00:08:42.184 "name": "pt1", 00:08:42.184 "uuid": "6bb105cf-28d1-5450-aef8-5fcd6de16cb0", 00:08:42.184 "is_configured": true, 00:08:42.184 "data_offset": 2048, 00:08:42.184 "data_size": 63488 00:08:42.184 }, 00:08:42.184 { 00:08:42.184 "name": "pt2", 00:08:42.184 "uuid": "f0e9fb9c-93d2-9459-871b-67f2f60f0b64", 00:08:42.185 "is_configured": true, 00:08:42.185 "data_offset": 2048, 00:08:42.185 "data_size": 63488 00:08:42.185 }, 00:08:42.185 { 00:08:42.185 "name": "pt3", 00:08:42.185 "uuid": "3b273773-02dc-b953-9b01-2996e854bd20", 00:08:42.185 "is_configured": true, 00:08:42.185 "data_offset": 2048, 00:08:42.185 "data_size": 63488 00:08:42.185 }, 00:08:42.185 { 00:08:42.185 "name": "pt4", 00:08:42.185 "uuid": "b58cc3c7-8a2a-175f-8b8d-3da613ad2bad", 00:08:42.185 "is_configured": true, 00:08:42.185 "data_offset": 2048, 00:08:42.185 "data_size": 63488 00:08:42.185 } 00:08:42.185 ] 00:08:42.185 }' 00:08:42.185 13:32:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:42.185 13:32:21 -- common/autotest_common.sh@10 -- # set +x 00:08:42.444 13:32:21 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:42.444 13:32:21 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:08:42.704 [2024-07-10 13:32:21.917522] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.704 13:32:21 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d5c410d3-3ec0-11ef-b9c4-5b09e08d4792 00:08:42.704 13:32:21 -- bdev/bdev_raid.sh@380 -- # '[' -z d5c410d3-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:08:42.704 13:32:21 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:42.964 [2024-07-10 13:32:22.105520] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.964 [2024-07-10 13:32:22.105537] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.964 [2024-07-10 13:32:22.105549] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.964 [2024-07-10 13:32:22.105558] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.964 [2024-07-10 13:32:22.105561] bdev_raid.c: 
352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82abb1900 name raid_bdev1, state offline 00:08:42.964 13:32:22 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:42.964 13:32:22 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:08:42.964 13:32:22 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:08:42.964 13:32:22 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:08:42.964 13:32:22 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:42.964 13:32:22 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:43.224 13:32:22 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.224 13:32:22 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:43.483 13:32:22 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.483 13:32:22 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:08:43.483 13:32:22 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.483 13:32:22 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:08:43.742 13:32:23 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:43.742 13:32:23 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:44.001 13:32:23 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:08:44.001 13:32:23 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:08:44.001 13:32:23 -- common/autotest_common.sh@640 -- # local es=0 00:08:44.001 13:32:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:08:44.001 13:32:23 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:44.001 13:32:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:44.001 13:32:23 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:44.001 13:32:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:44.001 13:32:23 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:44.001 13:32:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:44.001 13:32:23 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:44.001 13:32:23 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:44.001 13:32:23 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:08:44.260 [2024-07-10 13:32:23.449759] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:44.260 [2024-07-10 13:32:23.450209] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:44.260 [2024-07-10 
13:32:23.450227] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:44.260 [2024-07-10 13:32:23.450233] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:08:44.260 [2024-07-10 13:32:23.450243] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:08:44.260 [2024-07-10 13:32:23.450274] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:08:44.260 [2024-07-10 13:32:23.450282] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:08:44.260 [2024-07-10 13:32:23.450289] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:08:44.260 [2024-07-10 13:32:23.450294] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.260 [2024-07-10 13:32:23.450298] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82abb1680 name raid_bdev1, state configuring 00:08:44.260 request: 00:08:44.260 { 00:08:44.260 "name": "raid_bdev1", 00:08:44.260 "raid_level": "raid0", 00:08:44.260 "base_bdevs": [ 00:08:44.260 "malloc1", 00:08:44.260 "malloc2", 00:08:44.260 "malloc3", 00:08:44.260 "malloc4" 00:08:44.260 ], 00:08:44.260 "superblock": false, 00:08:44.260 "strip_size_kb": 64, 00:08:44.260 "method": "bdev_raid_create", 00:08:44.260 "req_id": 1 00:08:44.260 } 00:08:44.260 Got JSON-RPC error response 00:08:44.260 response: 00:08:44.260 { 00:08:44.260 "code": -17, 00:08:44.260 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:44.260 } 00:08:44.260 13:32:23 -- common/autotest_common.sh@643 -- # es=1 00:08:44.260 13:32:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:44.260 13:32:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:44.260 13:32:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:44.260 13:32:23 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:44.260 13:32:23 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:08:44.519 13:32:23 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:08:44.519 13:32:23 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:08:44.519 13:32:23 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:44.519 [2024-07-10 13:32:23.797791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:44.519 [2024-07-10 13:32:23.797826] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.519 [2024-07-10 13:32:23.797848] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abb1180 00:08:44.519 [2024-07-10 13:32:23.797854] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.519 [2024-07-10 13:32:23.798316] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.519 [2024-07-10 13:32:23.798345] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:44.519 [2024-07-10 13:32:23.798360] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:08:44.519 [2024-07-10 13:32:23.798370] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:44.519 pt1 
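For orientation, the superblock flow this test is walking through condenses to roughly the sequence below. It is a hedged sketch assembled from the commands visible in the surrounding log (same bdev names, sizes, and socket), not the test script itself.

    # Condensed sketch of the raid_superblock_test flow (illustrative only).
    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b malloc$i
        $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done

    # -s writes a raid superblock onto the base bdevs
    $rpc bdev_raid_create -z 64 -s -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1
    $rpc bdev_raid_delete raid_bdev1
    for i in 1 2 3 4; do $rpc bdev_passthru_delete pt$i; done

    # Re-creating directly on the malloc bdevs now fails with -17 "File exists":
    # examine still finds the old superblock on each base bdev.
    $rpc bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 || true

    # Re-registering the passthru bdevs (as the log does next for pt1..pt4) picks the
    # superblock back up and raid_bdev1 comes back in the 'configuring' state.
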
00:08:44.519 13:32:23 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:08:44.519 13:32:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:44.519 13:32:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:44.519 13:32:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:44.519 13:32:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:44.519 13:32:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:44.519 13:32:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:44.519 13:32:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:44.519 13:32:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:44.519 13:32:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:44.519 13:32:23 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:44.519 13:32:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.778 13:32:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:44.778 "name": "raid_bdev1", 00:08:44.778 "uuid": "d5c410d3-3ec0-11ef-b9c4-5b09e08d4792", 00:08:44.778 "strip_size_kb": 64, 00:08:44.778 "state": "configuring", 00:08:44.778 "raid_level": "raid0", 00:08:44.778 "superblock": true, 00:08:44.778 "num_base_bdevs": 4, 00:08:44.778 "num_base_bdevs_discovered": 1, 00:08:44.778 "num_base_bdevs_operational": 4, 00:08:44.778 "base_bdevs_list": [ 00:08:44.778 { 00:08:44.778 "name": "pt1", 00:08:44.778 "uuid": "6bb105cf-28d1-5450-aef8-5fcd6de16cb0", 00:08:44.778 "is_configured": true, 00:08:44.778 "data_offset": 2048, 00:08:44.778 "data_size": 63488 00:08:44.778 }, 00:08:44.778 { 00:08:44.778 "name": null, 00:08:44.778 "uuid": "f0e9fb9c-93d2-9459-871b-67f2f60f0b64", 00:08:44.778 "is_configured": false, 00:08:44.778 "data_offset": 2048, 00:08:44.778 "data_size": 63488 00:08:44.778 }, 00:08:44.778 { 00:08:44.778 "name": null, 00:08:44.778 "uuid": "3b273773-02dc-b953-9b01-2996e854bd20", 00:08:44.778 "is_configured": false, 00:08:44.778 "data_offset": 2048, 00:08:44.778 "data_size": 63488 00:08:44.778 }, 00:08:44.778 { 00:08:44.778 "name": null, 00:08:44.778 "uuid": "b58cc3c7-8a2a-175f-8b8d-3da613ad2bad", 00:08:44.778 "is_configured": false, 00:08:44.778 "data_offset": 2048, 00:08:44.778 "data_size": 63488 00:08:44.778 } 00:08:44.778 ] 00:08:44.778 }' 00:08:44.778 13:32:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:44.778 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:08:45.038 13:32:24 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:08:45.039 13:32:24 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:45.299 [2024-07-10 13:32:24.445880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:45.299 [2024-07-10 13:32:24.445935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.299 [2024-07-10 13:32:24.445957] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abb0780 00:08:45.299 [2024-07-10 13:32:24.445963] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.299 [2024-07-10 13:32:24.446028] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.299 [2024-07-10 13:32:24.446034] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: pt2 00:08:45.299 [2024-07-10 13:32:24.446047] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:08:45.299 [2024-07-10 13:32:24.446052] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:45.299 pt2 00:08:45.299 13:32:24 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:45.299 [2024-07-10 13:32:24.637904] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:45.299 13:32:24 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:08:45.299 13:32:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:45.299 13:32:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:45.299 13:32:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:45.299 13:32:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:45.299 13:32:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:45.299 13:32:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:45.299 13:32:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:45.299 13:32:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:45.299 13:32:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:45.299 13:32:24 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:45.299 13:32:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.559 13:32:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:45.559 "name": "raid_bdev1", 00:08:45.559 "uuid": "d5c410d3-3ec0-11ef-b9c4-5b09e08d4792", 00:08:45.559 "strip_size_kb": 64, 00:08:45.559 "state": "configuring", 00:08:45.559 "raid_level": "raid0", 00:08:45.559 "superblock": true, 00:08:45.559 "num_base_bdevs": 4, 00:08:45.559 "num_base_bdevs_discovered": 1, 00:08:45.559 "num_base_bdevs_operational": 4, 00:08:45.559 "base_bdevs_list": [ 00:08:45.559 { 00:08:45.559 "name": "pt1", 00:08:45.559 "uuid": "6bb105cf-28d1-5450-aef8-5fcd6de16cb0", 00:08:45.559 "is_configured": true, 00:08:45.559 "data_offset": 2048, 00:08:45.559 "data_size": 63488 00:08:45.559 }, 00:08:45.559 { 00:08:45.559 "name": null, 00:08:45.559 "uuid": "f0e9fb9c-93d2-9459-871b-67f2f60f0b64", 00:08:45.559 "is_configured": false, 00:08:45.559 "data_offset": 2048, 00:08:45.559 "data_size": 63488 00:08:45.559 }, 00:08:45.559 { 00:08:45.559 "name": null, 00:08:45.559 "uuid": "3b273773-02dc-b953-9b01-2996e854bd20", 00:08:45.559 "is_configured": false, 00:08:45.559 "data_offset": 2048, 00:08:45.559 "data_size": 63488 00:08:45.559 }, 00:08:45.559 { 00:08:45.559 "name": null, 00:08:45.559 "uuid": "b58cc3c7-8a2a-175f-8b8d-3da613ad2bad", 00:08:45.559 "is_configured": false, 00:08:45.559 "data_offset": 2048, 00:08:45.559 "data_size": 63488 00:08:45.559 } 00:08:45.559 ] 00:08:45.559 }' 00:08:45.559 13:32:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:45.559 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:08:45.819 13:32:25 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:08:45.819 13:32:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:45.819 13:32:25 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:46.079 [2024-07-10 13:32:25.297999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:08:46.079 [2024-07-10 13:32:25.298038] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.079 [2024-07-10 13:32:25.298056] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abb0780 00:08:46.079 [2024-07-10 13:32:25.298062] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.079 [2024-07-10 13:32:25.298118] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.079 [2024-07-10 13:32:25.298124] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:46.079 [2024-07-10 13:32:25.298136] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:08:46.079 [2024-07-10 13:32:25.298142] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:46.079 pt2 00:08:46.079 13:32:25 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:08:46.079 13:32:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:46.079 13:32:25 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:46.340 [2024-07-10 13:32:25.478026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:46.340 [2024-07-10 13:32:25.478066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.340 [2024-07-10 13:32:25.478079] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abb1b80 00:08:46.340 [2024-07-10 13:32:25.478084] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.340 [2024-07-10 13:32:25.478150] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.340 [2024-07-10 13:32:25.478156] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:46.340 [2024-07-10 13:32:25.478167] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:08:46.340 [2024-07-10 13:32:25.478172] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:46.340 pt3 00:08:46.340 13:32:25 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:08:46.340 13:32:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:46.340 13:32:25 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:08:46.340 [2024-07-10 13:32:25.666051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:08:46.340 [2024-07-10 13:32:25.666090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.340 [2024-07-10 13:32:25.666102] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abb1900 00:08:46.340 [2024-07-10 13:32:25.666109] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.340 [2024-07-10 13:32:25.666156] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.340 [2024-07-10 13:32:25.666162] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:08:46.340 [2024-07-10 13:32:25.666173] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:08:46.340 [2024-07-10 13:32:25.666178] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:08:46.340 
[2024-07-10 13:32:25.666198] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82abb0c80 00:08:46.340 [2024-07-10 13:32:25.666201] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:46.340 [2024-07-10 13:32:25.666216] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ac13e20 00:08:46.340 [2024-07-10 13:32:25.666250] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82abb0c80 00:08:46.340 [2024-07-10 13:32:25.666253] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82abb0c80 00:08:46.340 [2024-07-10 13:32:25.666266] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.340 pt4 00:08:46.340 13:32:25 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:08:46.340 13:32:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:46.340 13:32:25 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:08:46.340 13:32:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:46.340 13:32:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:46.340 13:32:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:46.340 13:32:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:46.340 13:32:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:46.340 13:32:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:46.340 13:32:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:46.340 13:32:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:46.340 13:32:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:46.340 13:32:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:46.340 13:32:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.605 13:32:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:46.605 "name": "raid_bdev1", 00:08:46.605 "uuid": "d5c410d3-3ec0-11ef-b9c4-5b09e08d4792", 00:08:46.605 "strip_size_kb": 64, 00:08:46.605 "state": "online", 00:08:46.605 "raid_level": "raid0", 00:08:46.605 "superblock": true, 00:08:46.605 "num_base_bdevs": 4, 00:08:46.605 "num_base_bdevs_discovered": 4, 00:08:46.605 "num_base_bdevs_operational": 4, 00:08:46.605 "base_bdevs_list": [ 00:08:46.605 { 00:08:46.605 "name": "pt1", 00:08:46.605 "uuid": "6bb105cf-28d1-5450-aef8-5fcd6de16cb0", 00:08:46.605 "is_configured": true, 00:08:46.605 "data_offset": 2048, 00:08:46.605 "data_size": 63488 00:08:46.605 }, 00:08:46.605 { 00:08:46.605 "name": "pt2", 00:08:46.605 "uuid": "f0e9fb9c-93d2-9459-871b-67f2f60f0b64", 00:08:46.605 "is_configured": true, 00:08:46.605 "data_offset": 2048, 00:08:46.605 "data_size": 63488 00:08:46.605 }, 00:08:46.605 { 00:08:46.605 "name": "pt3", 00:08:46.605 "uuid": "3b273773-02dc-b953-9b01-2996e854bd20", 00:08:46.605 "is_configured": true, 00:08:46.605 "data_offset": 2048, 00:08:46.605 "data_size": 63488 00:08:46.605 }, 00:08:46.605 { 00:08:46.605 "name": "pt4", 00:08:46.605 "uuid": "b58cc3c7-8a2a-175f-8b8d-3da613ad2bad", 00:08:46.605 "is_configured": true, 00:08:46.605 "data_offset": 2048, 00:08:46.605 "data_size": 63488 00:08:46.605 } 00:08:46.605 ] 00:08:46.605 }' 00:08:46.605 13:32:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:46.605 13:32:25 -- common/autotest_common.sh@10 -- # set +x 00:08:46.873 13:32:26 -- bdev/bdev_raid.sh@430 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:46.873 13:32:26 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:08:47.132 [2024-07-10 13:32:26.314165] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.132 13:32:26 -- bdev/bdev_raid.sh@430 -- # '[' d5c410d3-3ec0-11ef-b9c4-5b09e08d4792 '!=' d5c410d3-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:08:47.132 13:32:26 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:08:47.132 13:32:26 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:47.132 13:32:26 -- bdev/bdev_raid.sh@197 -- # return 1 00:08:47.132 13:32:26 -- bdev/bdev_raid.sh@511 -- # killprocess 51973 00:08:47.132 13:32:26 -- common/autotest_common.sh@926 -- # '[' -z 51973 ']' 00:08:47.132 13:32:26 -- common/autotest_common.sh@930 -- # kill -0 51973 00:08:47.132 13:32:26 -- common/autotest_common.sh@931 -- # uname 00:08:47.132 13:32:26 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:47.132 13:32:26 -- common/autotest_common.sh@934 -- # tail -1 00:08:47.132 13:32:26 -- common/autotest_common.sh@934 -- # ps -c -o command 51973 00:08:47.132 13:32:26 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:08:47.132 13:32:26 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:47.132 killing process with pid 51973 00:08:47.132 13:32:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 51973' 00:08:47.132 13:32:26 -- common/autotest_common.sh@945 -- # kill 51973 00:08:47.132 [2024-07-10 13:32:26.348237] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.132 [2024-07-10 13:32:26.348254] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.132 [2024-07-10 13:32:26.348276] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.132 [2024-07-10 13:32:26.348279] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82abb0c80 name raid_bdev1, state offline 00:08:47.132 13:32:26 -- common/autotest_common.sh@950 -- # wait 51973 00:08:47.132 [2024-07-10 13:32:26.366901] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@513 -- # return 0 00:08:47.391 00:08:47.391 real 0m7.804s 00:08:47.391 user 0m13.443s 00:08:47.391 sys 0m1.434s 00:08:47.391 13:32:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.391 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:08:47.391 ************************************ 00:08:47.391 END TEST raid_superblock_test 00:08:47.391 ************************************ 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:08:47.391 13:32:26 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:47.391 13:32:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:47.391 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:08:47.391 ************************************ 00:08:47.391 START TEST raid_state_function_test 00:08:47.391 ************************************ 00:08:47.391 13:32:26 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@204 -- # 
local superblock=false 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:08:47.391 13:32:26 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:08:47.392 13:32:26 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:08:47.392 13:32:26 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:08:47.392 13:32:26 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:08:47.392 13:32:26 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:08:47.392 13:32:26 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:08:47.392 13:32:26 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:08:47.392 13:32:26 -- bdev/bdev_raid.sh@226 -- # raid_pid=52158 00:08:47.392 13:32:26 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 52158' 00:08:47.392 Process raid pid: 52158 00:08:47.392 13:32:26 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:47.392 13:32:26 -- bdev/bdev_raid.sh@228 -- # waitforlisten 52158 /var/tmp/spdk-raid.sock 00:08:47.392 13:32:26 -- common/autotest_common.sh@819 -- # '[' -z 52158 ']' 00:08:47.392 13:32:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:47.392 13:32:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:47.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:47.392 13:32:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:47.392 13:32:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:47.392 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:08:47.392 [2024-07-10 13:32:26.584864] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:47.392 [2024-07-10 13:32:26.585173] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:47.651 EAL: TSC is not safe to use in SMP mode 00:08:47.651 EAL: TSC is not invariant 00:08:47.651 [2024-07-10 13:32:27.018440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.911 [2024-07-10 13:32:27.094314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.911 [2024-07-10 13:32:27.094736] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.911 [2024-07-10 13:32:27.094744] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.169 13:32:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:48.169 13:32:27 -- common/autotest_common.sh@852 -- # return 0 00:08:48.169 13:32:27 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:08:48.427 [2024-07-10 13:32:27.661823] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.427 [2024-07-10 13:32:27.661867] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.427 [2024-07-10 13:32:27.661871] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.427 [2024-07-10 13:32:27.661877] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.427 [2024-07-10 13:32:27.661879] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:48.427 [2024-07-10 13:32:27.661884] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.427 [2024-07-10 13:32:27.661887] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:48.427 [2024-07-10 13:32:27.661892] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:48.427 13:32:27 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:48.427 13:32:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:48.427 13:32:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:48.427 13:32:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:48.427 13:32:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:48.427 13:32:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:48.427 13:32:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:48.427 13:32:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:48.427 13:32:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:48.427 13:32:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:48.427 13:32:27 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:48.427 13:32:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.686 13:32:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:48.686 "name": "Existed_Raid", 00:08:48.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.686 "strip_size_kb": 64, 00:08:48.686 "state": "configuring", 00:08:48.686 "raid_level": "concat", 00:08:48.686 "superblock": false, 00:08:48.686 "num_base_bdevs": 4, 00:08:48.686 
"num_base_bdevs_discovered": 0, 00:08:48.686 "num_base_bdevs_operational": 4, 00:08:48.686 "base_bdevs_list": [ 00:08:48.686 { 00:08:48.686 "name": "BaseBdev1", 00:08:48.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.686 "is_configured": false, 00:08:48.686 "data_offset": 0, 00:08:48.686 "data_size": 0 00:08:48.686 }, 00:08:48.686 { 00:08:48.686 "name": "BaseBdev2", 00:08:48.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.686 "is_configured": false, 00:08:48.686 "data_offset": 0, 00:08:48.686 "data_size": 0 00:08:48.686 }, 00:08:48.686 { 00:08:48.686 "name": "BaseBdev3", 00:08:48.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.686 "is_configured": false, 00:08:48.686 "data_offset": 0, 00:08:48.686 "data_size": 0 00:08:48.686 }, 00:08:48.686 { 00:08:48.686 "name": "BaseBdev4", 00:08:48.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.686 "is_configured": false, 00:08:48.686 "data_offset": 0, 00:08:48.686 "data_size": 0 00:08:48.686 } 00:08:48.686 ] 00:08:48.686 }' 00:08:48.686 13:32:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:48.686 13:32:27 -- common/autotest_common.sh@10 -- # set +x 00:08:48.946 13:32:28 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:48.946 [2024-07-10 13:32:28.309887] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.946 [2024-07-10 13:32:28.309909] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a99b500 name Existed_Raid, state configuring 00:08:49.205 13:32:28 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:08:49.205 [2024-07-10 13:32:28.501918] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:49.205 [2024-07-10 13:32:28.501959] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:49.205 [2024-07-10 13:32:28.501962] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.205 [2024-07-10 13:32:28.501968] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.205 [2024-07-10 13:32:28.501970] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:49.205 [2024-07-10 13:32:28.502009] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:49.205 [2024-07-10 13:32:28.502012] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:49.205 [2024-07-10 13:32:28.502017] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:49.205 13:32:28 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:49.464 [2024-07-10 13:32:28.686717] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.464 BaseBdev1 00:08:49.464 13:32:28 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:08:49.464 13:32:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:08:49.464 13:32:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:49.464 13:32:28 -- common/autotest_common.sh@889 -- # local i 00:08:49.464 13:32:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 
00:08:49.464 13:32:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:49.464 13:32:28 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:49.724 13:32:28 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:49.724 [ 00:08:49.724 { 00:08:49.724 "name": "BaseBdev1", 00:08:49.724 "aliases": [ 00:08:49.724 "da2de904-3ec0-11ef-b9c4-5b09e08d4792" 00:08:49.724 ], 00:08:49.724 "product_name": "Malloc disk", 00:08:49.724 "block_size": 512, 00:08:49.724 "num_blocks": 65536, 00:08:49.724 "uuid": "da2de904-3ec0-11ef-b9c4-5b09e08d4792", 00:08:49.724 "assigned_rate_limits": { 00:08:49.724 "rw_ios_per_sec": 0, 00:08:49.724 "rw_mbytes_per_sec": 0, 00:08:49.724 "r_mbytes_per_sec": 0, 00:08:49.724 "w_mbytes_per_sec": 0 00:08:49.724 }, 00:08:49.724 "claimed": true, 00:08:49.724 "claim_type": "exclusive_write", 00:08:49.724 "zoned": false, 00:08:49.724 "supported_io_types": { 00:08:49.724 "read": true, 00:08:49.724 "write": true, 00:08:49.724 "unmap": true, 00:08:49.724 "write_zeroes": true, 00:08:49.724 "flush": true, 00:08:49.724 "reset": true, 00:08:49.724 "compare": false, 00:08:49.724 "compare_and_write": false, 00:08:49.724 "abort": true, 00:08:49.724 "nvme_admin": false, 00:08:49.724 "nvme_io": false 00:08:49.724 }, 00:08:49.724 "memory_domains": [ 00:08:49.724 { 00:08:49.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.724 "dma_device_type": 2 00:08:49.724 } 00:08:49.724 ], 00:08:49.724 "driver_specific": {} 00:08:49.724 } 00:08:49.724 ] 00:08:49.724 13:32:29 -- common/autotest_common.sh@895 -- # return 0 00:08:49.724 13:32:29 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:49.724 13:32:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:49.724 13:32:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:49.724 13:32:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:49.724 13:32:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:49.724 13:32:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:49.724 13:32:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:49.724 13:32:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:49.724 13:32:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:49.724 13:32:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:49.724 13:32:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.724 13:32:29 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:49.983 13:32:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:49.983 "name": "Existed_Raid", 00:08:49.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.983 "strip_size_kb": 64, 00:08:49.983 "state": "configuring", 00:08:49.983 "raid_level": "concat", 00:08:49.983 "superblock": false, 00:08:49.983 "num_base_bdevs": 4, 00:08:49.983 "num_base_bdevs_discovered": 1, 00:08:49.983 "num_base_bdevs_operational": 4, 00:08:49.983 "base_bdevs_list": [ 00:08:49.983 { 00:08:49.983 "name": "BaseBdev1", 00:08:49.983 "uuid": "da2de904-3ec0-11ef-b9c4-5b09e08d4792", 00:08:49.983 "is_configured": true, 00:08:49.983 "data_offset": 0, 00:08:49.983 "data_size": 65536 00:08:49.983 }, 00:08:49.983 { 00:08:49.983 "name": "BaseBdev2", 00:08:49.983 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:49.983 "is_configured": false, 00:08:49.983 "data_offset": 0, 00:08:49.983 "data_size": 0 00:08:49.983 }, 00:08:49.983 { 00:08:49.983 "name": "BaseBdev3", 00:08:49.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.983 "is_configured": false, 00:08:49.983 "data_offset": 0, 00:08:49.983 "data_size": 0 00:08:49.983 }, 00:08:49.983 { 00:08:49.983 "name": "BaseBdev4", 00:08:49.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.983 "is_configured": false, 00:08:49.983 "data_offset": 0, 00:08:49.983 "data_size": 0 00:08:49.983 } 00:08:49.983 ] 00:08:49.983 }' 00:08:49.983 13:32:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:49.983 13:32:29 -- common/autotest_common.sh@10 -- # set +x 00:08:50.243 13:32:29 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:50.501 [2024-07-10 13:32:29.710089] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:50.501 [2024-07-10 13:32:29.710111] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a99b500 name Existed_Raid, state configuring 00:08:50.501 13:32:29 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:08:50.501 13:32:29 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:08:50.501 [2024-07-10 13:32:29.870130] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.501 [2024-07-10 13:32:29.870722] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:50.501 [2024-07-10 13:32:29.870757] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:50.501 [2024-07-10 13:32:29.870760] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:50.501 [2024-07-10 13:32:29.870766] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:50.501 [2024-07-10 13:32:29.870768] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:50.501 [2024-07-10 13:32:29.870774] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:50.759 13:32:29 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:08:50.759 13:32:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:50.759 13:32:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:50.759 13:32:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:50.759 13:32:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:50.759 13:32:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:50.759 13:32:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:50.759 13:32:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:50.759 13:32:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:50.759 13:32:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:50.759 13:32:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:50.759 13:32:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:50.759 13:32:29 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:50.759 13:32:29 -- bdev/bdev_raid.sh@127 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:08:50.759 13:32:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:50.759 "name": "Existed_Raid", 00:08:50.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.759 "strip_size_kb": 64, 00:08:50.759 "state": "configuring", 00:08:50.759 "raid_level": "concat", 00:08:50.759 "superblock": false, 00:08:50.759 "num_base_bdevs": 4, 00:08:50.759 "num_base_bdevs_discovered": 1, 00:08:50.759 "num_base_bdevs_operational": 4, 00:08:50.759 "base_bdevs_list": [ 00:08:50.759 { 00:08:50.759 "name": "BaseBdev1", 00:08:50.759 "uuid": "da2de904-3ec0-11ef-b9c4-5b09e08d4792", 00:08:50.759 "is_configured": true, 00:08:50.759 "data_offset": 0, 00:08:50.759 "data_size": 65536 00:08:50.759 }, 00:08:50.759 { 00:08:50.759 "name": "BaseBdev2", 00:08:50.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.759 "is_configured": false, 00:08:50.759 "data_offset": 0, 00:08:50.759 "data_size": 0 00:08:50.759 }, 00:08:50.759 { 00:08:50.759 "name": "BaseBdev3", 00:08:50.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.759 "is_configured": false, 00:08:50.759 "data_offset": 0, 00:08:50.759 "data_size": 0 00:08:50.759 }, 00:08:50.759 { 00:08:50.759 "name": "BaseBdev4", 00:08:50.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.759 "is_configured": false, 00:08:50.759 "data_offset": 0, 00:08:50.759 "data_size": 0 00:08:50.759 } 00:08:50.759 ] 00:08:50.759 }' 00:08:50.759 13:32:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:50.759 13:32:30 -- common/autotest_common.sh@10 -- # set +x 00:08:51.016 13:32:30 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:51.274 [2024-07-10 13:32:30.530328] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.274 BaseBdev2 00:08:51.274 13:32:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:08:51.274 13:32:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:08:51.274 13:32:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:51.274 13:32:30 -- common/autotest_common.sh@889 -- # local i 00:08:51.274 13:32:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:51.274 13:32:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:51.274 13:32:30 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:51.532 13:32:30 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:51.812 [ 00:08:51.812 { 00:08:51.812 "name": "BaseBdev2", 00:08:51.812 "aliases": [ 00:08:51.812 "db475370-3ec0-11ef-b9c4-5b09e08d4792" 00:08:51.812 ], 00:08:51.812 "product_name": "Malloc disk", 00:08:51.812 "block_size": 512, 00:08:51.812 "num_blocks": 65536, 00:08:51.812 "uuid": "db475370-3ec0-11ef-b9c4-5b09e08d4792", 00:08:51.812 "assigned_rate_limits": { 00:08:51.812 "rw_ios_per_sec": 0, 00:08:51.812 "rw_mbytes_per_sec": 0, 00:08:51.812 "r_mbytes_per_sec": 0, 00:08:51.812 "w_mbytes_per_sec": 0 00:08:51.812 }, 00:08:51.812 "claimed": true, 00:08:51.812 "claim_type": "exclusive_write", 00:08:51.812 "zoned": false, 00:08:51.812 "supported_io_types": { 00:08:51.812 "read": true, 00:08:51.812 "write": true, 00:08:51.812 "unmap": true, 00:08:51.812 "write_zeroes": true, 00:08:51.812 "flush": true, 00:08:51.812 "reset": true, 00:08:51.812 "compare": false, 00:08:51.812 
"compare_and_write": false, 00:08:51.812 "abort": true, 00:08:51.812 "nvme_admin": false, 00:08:51.812 "nvme_io": false 00:08:51.812 }, 00:08:51.812 "memory_domains": [ 00:08:51.812 { 00:08:51.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.812 "dma_device_type": 2 00:08:51.812 } 00:08:51.812 ], 00:08:51.812 "driver_specific": {} 00:08:51.812 } 00:08:51.812 ] 00:08:51.812 13:32:30 -- common/autotest_common.sh@895 -- # return 0 00:08:51.812 13:32:30 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:51.812 13:32:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:51.812 13:32:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:51.812 13:32:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:51.812 13:32:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:51.812 13:32:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:51.812 13:32:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:51.812 13:32:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:51.812 13:32:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:51.812 13:32:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:51.812 13:32:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:51.812 13:32:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:51.812 13:32:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.812 13:32:30 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:51.812 13:32:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:51.812 "name": "Existed_Raid", 00:08:51.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.812 "strip_size_kb": 64, 00:08:51.812 "state": "configuring", 00:08:51.812 "raid_level": "concat", 00:08:51.812 "superblock": false, 00:08:51.812 "num_base_bdevs": 4, 00:08:51.812 "num_base_bdevs_discovered": 2, 00:08:51.812 "num_base_bdevs_operational": 4, 00:08:51.812 "base_bdevs_list": [ 00:08:51.812 { 00:08:51.812 "name": "BaseBdev1", 00:08:51.812 "uuid": "da2de904-3ec0-11ef-b9c4-5b09e08d4792", 00:08:51.812 "is_configured": true, 00:08:51.812 "data_offset": 0, 00:08:51.813 "data_size": 65536 00:08:51.813 }, 00:08:51.813 { 00:08:51.813 "name": "BaseBdev2", 00:08:51.813 "uuid": "db475370-3ec0-11ef-b9c4-5b09e08d4792", 00:08:51.813 "is_configured": true, 00:08:51.813 "data_offset": 0, 00:08:51.813 "data_size": 65536 00:08:51.813 }, 00:08:51.813 { 00:08:51.813 "name": "BaseBdev3", 00:08:51.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.813 "is_configured": false, 00:08:51.813 "data_offset": 0, 00:08:51.813 "data_size": 0 00:08:51.813 }, 00:08:51.813 { 00:08:51.813 "name": "BaseBdev4", 00:08:51.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.813 "is_configured": false, 00:08:51.813 "data_offset": 0, 00:08:51.813 "data_size": 0 00:08:51.813 } 00:08:51.813 ] 00:08:51.813 }' 00:08:51.813 13:32:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:51.813 13:32:31 -- common/autotest_common.sh@10 -- # set +x 00:08:52.070 13:32:31 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:52.328 [2024-07-10 13:32:31.594446] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:52.328 BaseBdev3 00:08:52.328 13:32:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 
00:08:52.328 13:32:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:08:52.328 13:32:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:52.328 13:32:31 -- common/autotest_common.sh@889 -- # local i 00:08:52.328 13:32:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:52.328 13:32:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:52.328 13:32:31 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:52.587 13:32:31 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:52.846 [ 00:08:52.846 { 00:08:52.846 "name": "BaseBdev3", 00:08:52.846 "aliases": [ 00:08:52.846 "dbe9b3bd-3ec0-11ef-b9c4-5b09e08d4792" 00:08:52.846 ], 00:08:52.846 "product_name": "Malloc disk", 00:08:52.846 "block_size": 512, 00:08:52.846 "num_blocks": 65536, 00:08:52.846 "uuid": "dbe9b3bd-3ec0-11ef-b9c4-5b09e08d4792", 00:08:52.846 "assigned_rate_limits": { 00:08:52.846 "rw_ios_per_sec": 0, 00:08:52.846 "rw_mbytes_per_sec": 0, 00:08:52.846 "r_mbytes_per_sec": 0, 00:08:52.846 "w_mbytes_per_sec": 0 00:08:52.846 }, 00:08:52.846 "claimed": true, 00:08:52.846 "claim_type": "exclusive_write", 00:08:52.846 "zoned": false, 00:08:52.846 "supported_io_types": { 00:08:52.846 "read": true, 00:08:52.846 "write": true, 00:08:52.846 "unmap": true, 00:08:52.846 "write_zeroes": true, 00:08:52.846 "flush": true, 00:08:52.846 "reset": true, 00:08:52.846 "compare": false, 00:08:52.846 "compare_and_write": false, 00:08:52.846 "abort": true, 00:08:52.846 "nvme_admin": false, 00:08:52.846 "nvme_io": false 00:08:52.846 }, 00:08:52.846 "memory_domains": [ 00:08:52.846 { 00:08:52.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.846 "dma_device_type": 2 00:08:52.846 } 00:08:52.846 ], 00:08:52.846 "driver_specific": {} 00:08:52.846 } 00:08:52.846 ] 00:08:52.846 13:32:31 -- common/autotest_common.sh@895 -- # return 0 00:08:52.847 13:32:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:52.847 13:32:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:52.847 13:32:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:52.847 13:32:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:52.847 13:32:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:52.847 13:32:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:52.847 13:32:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:52.847 13:32:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:52.847 13:32:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:52.847 13:32:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:52.847 13:32:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:52.847 13:32:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:52.847 13:32:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.847 13:32:31 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:52.847 13:32:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:52.847 "name": "Existed_Raid", 00:08:52.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.847 "strip_size_kb": 64, 00:08:52.847 "state": "configuring", 00:08:52.847 "raid_level": "concat", 00:08:52.847 "superblock": false, 
00:08:52.847 "num_base_bdevs": 4, 00:08:52.847 "num_base_bdevs_discovered": 3, 00:08:52.847 "num_base_bdevs_operational": 4, 00:08:52.847 "base_bdevs_list": [ 00:08:52.847 { 00:08:52.847 "name": "BaseBdev1", 00:08:52.847 "uuid": "da2de904-3ec0-11ef-b9c4-5b09e08d4792", 00:08:52.847 "is_configured": true, 00:08:52.847 "data_offset": 0, 00:08:52.847 "data_size": 65536 00:08:52.847 }, 00:08:52.847 { 00:08:52.847 "name": "BaseBdev2", 00:08:52.847 "uuid": "db475370-3ec0-11ef-b9c4-5b09e08d4792", 00:08:52.847 "is_configured": true, 00:08:52.847 "data_offset": 0, 00:08:52.847 "data_size": 65536 00:08:52.847 }, 00:08:52.847 { 00:08:52.847 "name": "BaseBdev3", 00:08:52.847 "uuid": "dbe9b3bd-3ec0-11ef-b9c4-5b09e08d4792", 00:08:52.847 "is_configured": true, 00:08:52.847 "data_offset": 0, 00:08:52.847 "data_size": 65536 00:08:52.847 }, 00:08:52.847 { 00:08:52.847 "name": "BaseBdev4", 00:08:52.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.847 "is_configured": false, 00:08:52.847 "data_offset": 0, 00:08:52.847 "data_size": 0 00:08:52.847 } 00:08:52.847 ] 00:08:52.847 }' 00:08:52.847 13:32:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:52.847 13:32:32 -- common/autotest_common.sh@10 -- # set +x 00:08:53.106 13:32:32 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:08:53.365 [2024-07-10 13:32:32.626569] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:53.365 [2024-07-10 13:32:32.626591] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a99ba00 00:08:53.365 [2024-07-10 13:32:32.626594] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:08:53.365 [2024-07-10 13:32:32.626615] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a9feec0 00:08:53.365 [2024-07-10 13:32:32.626687] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a99ba00 00:08:53.365 [2024-07-10 13:32:32.626690] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82a99ba00 00:08:53.365 [2024-07-10 13:32:32.626712] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.365 BaseBdev4 00:08:53.365 13:32:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:08:53.366 13:32:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:08:53.366 13:32:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:53.366 13:32:32 -- common/autotest_common.sh@889 -- # local i 00:08:53.366 13:32:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:53.366 13:32:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:53.366 13:32:32 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:53.625 13:32:32 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:53.883 [ 00:08:53.884 { 00:08:53.884 "name": "BaseBdev4", 00:08:53.884 "aliases": [ 00:08:53.884 "dc87314b-3ec0-11ef-b9c4-5b09e08d4792" 00:08:53.884 ], 00:08:53.884 "product_name": "Malloc disk", 00:08:53.884 "block_size": 512, 00:08:53.884 "num_blocks": 65536, 00:08:53.884 "uuid": "dc87314b-3ec0-11ef-b9c4-5b09e08d4792", 00:08:53.884 "assigned_rate_limits": { 00:08:53.884 "rw_ios_per_sec": 0, 00:08:53.884 "rw_mbytes_per_sec": 0, 00:08:53.884 
"r_mbytes_per_sec": 0, 00:08:53.884 "w_mbytes_per_sec": 0 00:08:53.884 }, 00:08:53.884 "claimed": true, 00:08:53.884 "claim_type": "exclusive_write", 00:08:53.884 "zoned": false, 00:08:53.884 "supported_io_types": { 00:08:53.884 "read": true, 00:08:53.884 "write": true, 00:08:53.884 "unmap": true, 00:08:53.884 "write_zeroes": true, 00:08:53.884 "flush": true, 00:08:53.884 "reset": true, 00:08:53.884 "compare": false, 00:08:53.884 "compare_and_write": false, 00:08:53.884 "abort": true, 00:08:53.884 "nvme_admin": false, 00:08:53.884 "nvme_io": false 00:08:53.884 }, 00:08:53.884 "memory_domains": [ 00:08:53.884 { 00:08:53.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.884 "dma_device_type": 2 00:08:53.884 } 00:08:53.884 ], 00:08:53.884 "driver_specific": {} 00:08:53.884 } 00:08:53.884 ] 00:08:53.884 13:32:33 -- common/autotest_common.sh@895 -- # return 0 00:08:53.884 13:32:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:53.884 13:32:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:53.884 13:32:33 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:08:53.884 13:32:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:53.884 13:32:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:53.884 13:32:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:53.884 13:32:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:53.884 13:32:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:53.884 13:32:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:53.884 13:32:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:53.884 13:32:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:53.884 13:32:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:53.884 13:32:33 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:53.884 13:32:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.884 13:32:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:53.884 "name": "Existed_Raid", 00:08:53.884 "uuid": "dc87347f-3ec0-11ef-b9c4-5b09e08d4792", 00:08:53.884 "strip_size_kb": 64, 00:08:53.884 "state": "online", 00:08:53.884 "raid_level": "concat", 00:08:53.884 "superblock": false, 00:08:53.884 "num_base_bdevs": 4, 00:08:53.884 "num_base_bdevs_discovered": 4, 00:08:53.884 "num_base_bdevs_operational": 4, 00:08:53.884 "base_bdevs_list": [ 00:08:53.884 { 00:08:53.884 "name": "BaseBdev1", 00:08:53.884 "uuid": "da2de904-3ec0-11ef-b9c4-5b09e08d4792", 00:08:53.884 "is_configured": true, 00:08:53.884 "data_offset": 0, 00:08:53.884 "data_size": 65536 00:08:53.884 }, 00:08:53.884 { 00:08:53.884 "name": "BaseBdev2", 00:08:53.884 "uuid": "db475370-3ec0-11ef-b9c4-5b09e08d4792", 00:08:53.884 "is_configured": true, 00:08:53.884 "data_offset": 0, 00:08:53.884 "data_size": 65536 00:08:53.884 }, 00:08:53.884 { 00:08:53.884 "name": "BaseBdev3", 00:08:53.884 "uuid": "dbe9b3bd-3ec0-11ef-b9c4-5b09e08d4792", 00:08:53.884 "is_configured": true, 00:08:53.884 "data_offset": 0, 00:08:53.884 "data_size": 65536 00:08:53.884 }, 00:08:53.884 { 00:08:53.884 "name": "BaseBdev4", 00:08:53.884 "uuid": "dc87314b-3ec0-11ef-b9c4-5b09e08d4792", 00:08:53.884 "is_configured": true, 00:08:53.884 "data_offset": 0, 00:08:53.884 "data_size": 65536 00:08:53.884 } 00:08:53.884 ] 00:08:53.884 }' 00:08:53.884 13:32:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:53.884 13:32:33 -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.143 13:32:33 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:54.403 [2024-07-10 13:32:33.674672] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:54.403 [2024-07-10 13:32:33.674695] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.403 [2024-07-10 13:32:33.674704] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.403 13:32:33 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:08:54.403 13:32:33 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:08:54.403 13:32:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:54.403 13:32:33 -- bdev/bdev_raid.sh@197 -- # return 1 00:08:54.403 13:32:33 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:08:54.403 13:32:33 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:08:54.403 13:32:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:54.403 13:32:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:08:54.403 13:32:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:54.403 13:32:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:54.403 13:32:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:54.403 13:32:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:54.403 13:32:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:54.403 13:32:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:54.403 13:32:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:54.403 13:32:33 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:54.403 13:32:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.663 13:32:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:54.663 "name": "Existed_Raid", 00:08:54.663 "uuid": "dc87347f-3ec0-11ef-b9c4-5b09e08d4792", 00:08:54.663 "strip_size_kb": 64, 00:08:54.663 "state": "offline", 00:08:54.663 "raid_level": "concat", 00:08:54.663 "superblock": false, 00:08:54.663 "num_base_bdevs": 4, 00:08:54.663 "num_base_bdevs_discovered": 3, 00:08:54.663 "num_base_bdevs_operational": 3, 00:08:54.663 "base_bdevs_list": [ 00:08:54.663 { 00:08:54.663 "name": null, 00:08:54.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.663 "is_configured": false, 00:08:54.663 "data_offset": 0, 00:08:54.663 "data_size": 65536 00:08:54.663 }, 00:08:54.663 { 00:08:54.663 "name": "BaseBdev2", 00:08:54.663 "uuid": "db475370-3ec0-11ef-b9c4-5b09e08d4792", 00:08:54.663 "is_configured": true, 00:08:54.663 "data_offset": 0, 00:08:54.663 "data_size": 65536 00:08:54.663 }, 00:08:54.663 { 00:08:54.663 "name": "BaseBdev3", 00:08:54.663 "uuid": "dbe9b3bd-3ec0-11ef-b9c4-5b09e08d4792", 00:08:54.663 "is_configured": true, 00:08:54.663 "data_offset": 0, 00:08:54.663 "data_size": 65536 00:08:54.663 }, 00:08:54.663 { 00:08:54.663 "name": "BaseBdev4", 00:08:54.663 "uuid": "dc87314b-3ec0-11ef-b9c4-5b09e08d4792", 00:08:54.663 "is_configured": true, 00:08:54.663 "data_offset": 0, 00:08:54.663 "data_size": 65536 00:08:54.663 } 00:08:54.663 ] 00:08:54.663 }' 00:08:54.663 13:32:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:54.663 13:32:33 -- common/autotest_common.sh@10 -- # set +x 00:08:54.922 13:32:34 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:08:54.922 
13:32:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:54.922 13:32:34 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:54.922 13:32:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:55.183 13:32:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:55.183 13:32:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:55.183 13:32:34 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:55.183 [2024-07-10 13:32:34.507499] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:55.183 13:32:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:55.183 13:32:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:55.183 13:32:34 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:55.183 13:32:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:55.443 13:32:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:55.443 13:32:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:55.443 13:32:34 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:08:55.703 [2024-07-10 13:32:34.880201] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:55.703 13:32:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:55.703 13:32:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:55.703 13:32:34 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:55.703 13:32:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:55.962 13:32:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:55.962 13:32:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:55.962 13:32:35 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:08:55.963 [2024-07-10 13:32:35.316977] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:08:55.963 [2024-07-10 13:32:35.317000] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a99ba00 name Existed_Raid, state offline 00:08:56.222 13:32:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:56.222 13:32:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:56.222 13:32:35 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:56.222 13:32:35 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:08:56.222 13:32:35 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:08:56.222 13:32:35 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:08:56.222 13:32:35 -- bdev/bdev_raid.sh@287 -- # killprocess 52158 00:08:56.222 13:32:35 -- common/autotest_common.sh@926 -- # '[' -z 52158 ']' 00:08:56.222 13:32:35 -- common/autotest_common.sh@930 -- # kill -0 52158 00:08:56.222 13:32:35 -- common/autotest_common.sh@931 -- # uname 00:08:56.222 13:32:35 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:56.222 13:32:35 -- common/autotest_common.sh@934 -- # tail -1 00:08:56.222 13:32:35 -- common/autotest_common.sh@934 -- # ps -c -o command 52158 00:08:56.223 13:32:35 -- common/autotest_common.sh@934 -- # 
process_name=bdev_svc 00:08:56.223 13:32:35 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:56.223 killing process with pid 52158 00:08:56.223 13:32:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52158' 00:08:56.223 13:32:35 -- common/autotest_common.sh@945 -- # kill 52158 00:08:56.223 [2024-07-10 13:32:35.533746] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.223 [2024-07-10 13:32:35.533778] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.223 13:32:35 -- common/autotest_common.sh@950 -- # wait 52158 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@289 -- # return 0 00:08:56.483 00:08:56.483 real 0m9.113s 00:08:56.483 user 0m15.778s 00:08:56.483 sys 0m1.757s 00:08:56.483 13:32:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.483 13:32:35 -- common/autotest_common.sh@10 -- # set +x 00:08:56.483 ************************************ 00:08:56.483 END TEST raid_state_function_test 00:08:56.483 ************************************ 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:08:56.483 13:32:35 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:56.483 13:32:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:56.483 13:32:35 -- common/autotest_common.sh@10 -- # set +x 00:08:56.483 ************************************ 00:08:56.483 START TEST raid_state_function_test_sb 00:08:56.483 ************************************ 00:08:56.483 13:32:35 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@213 -- # strip_size=64 
00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@226 -- # raid_pid=52428 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 52428' 00:08:56.483 Process raid pid: 52428 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:56.483 13:32:35 -- bdev/bdev_raid.sh@228 -- # waitforlisten 52428 /var/tmp/spdk-raid.sock 00:08:56.483 13:32:35 -- common/autotest_common.sh@819 -- # '[' -z 52428 ']' 00:08:56.483 13:32:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:56.483 13:32:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:56.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:56.483 13:32:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:56.483 13:32:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:56.483 13:32:35 -- common/autotest_common.sh@10 -- # set +x 00:08:56.483 [2024-07-10 13:32:35.749958] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:56.483 [2024-07-10 13:32:35.750230] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:57.053 EAL: TSC is not safe to use in SMP mode 00:08:57.053 EAL: TSC is not invariant 00:08:57.053 [2024-07-10 13:32:36.181272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.053 [2024-07-10 13:32:36.269911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.053 [2024-07-10 13:32:36.270328] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.053 [2024-07-10 13:32:36.270337] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.312 13:32:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:57.313 13:32:36 -- common/autotest_common.sh@852 -- # return 0 00:08:57.313 13:32:36 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:08:57.572 [2024-07-10 13:32:36.833331] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.572 [2024-07-10 13:32:36.833380] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.572 [2024-07-10 13:32:36.833384] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.572 [2024-07-10 13:32:36.833390] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.572 [2024-07-10 13:32:36.833393] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:57.572 [2024-07-10 13:32:36.833399] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.572 [2024-07-10 13:32:36.833401] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:57.573 [2024-07-10 13:32:36.833423] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 
doesn't exist now 00:08:57.573 13:32:36 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:57.573 13:32:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:57.573 13:32:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:57.573 13:32:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:57.573 13:32:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:57.573 13:32:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:57.573 13:32:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:57.573 13:32:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:57.573 13:32:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:57.573 13:32:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:57.573 13:32:36 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:57.573 13:32:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.833 13:32:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:57.833 "name": "Existed_Raid", 00:08:57.833 "uuid": "df0919c5-3ec0-11ef-b9c4-5b09e08d4792", 00:08:57.833 "strip_size_kb": 64, 00:08:57.833 "state": "configuring", 00:08:57.833 "raid_level": "concat", 00:08:57.833 "superblock": true, 00:08:57.833 "num_base_bdevs": 4, 00:08:57.833 "num_base_bdevs_discovered": 0, 00:08:57.833 "num_base_bdevs_operational": 4, 00:08:57.833 "base_bdevs_list": [ 00:08:57.833 { 00:08:57.833 "name": "BaseBdev1", 00:08:57.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.833 "is_configured": false, 00:08:57.833 "data_offset": 0, 00:08:57.833 "data_size": 0 00:08:57.833 }, 00:08:57.833 { 00:08:57.833 "name": "BaseBdev2", 00:08:57.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.833 "is_configured": false, 00:08:57.833 "data_offset": 0, 00:08:57.833 "data_size": 0 00:08:57.833 }, 00:08:57.833 { 00:08:57.833 "name": "BaseBdev3", 00:08:57.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.833 "is_configured": false, 00:08:57.833 "data_offset": 0, 00:08:57.833 "data_size": 0 00:08:57.833 }, 00:08:57.833 { 00:08:57.833 "name": "BaseBdev4", 00:08:57.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.833 "is_configured": false, 00:08:57.833 "data_offset": 0, 00:08:57.833 "data_size": 0 00:08:57.833 } 00:08:57.833 ] 00:08:57.833 }' 00:08:57.833 13:32:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:57.833 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:08:58.099 13:32:37 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:58.367 [2024-07-10 13:32:37.485396] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.367 [2024-07-10 13:32:37.485421] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d053500 name Existed_Raid, state configuring 00:08:58.367 13:32:37 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:08:58.367 [2024-07-10 13:32:37.681416] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.367 [2024-07-10 13:32:37.681453] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.367 [2024-07-10 13:32:37.681456] 
bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.367 [2024-07-10 13:32:37.681461] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.367 [2024-07-10 13:32:37.681463] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:58.367 [2024-07-10 13:32:37.681469] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.367 [2024-07-10 13:32:37.681471] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:58.367 [2024-07-10 13:32:37.681476] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:58.367 13:32:37 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:58.627 [2024-07-10 13:32:37.854226] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.627 BaseBdev1 00:08:58.627 13:32:37 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:08:58.627 13:32:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:08:58.627 13:32:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:58.627 13:32:37 -- common/autotest_common.sh@889 -- # local i 00:08:58.627 13:32:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:58.627 13:32:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:58.627 13:32:37 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:58.887 13:32:38 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:59.147 [ 00:08:59.147 { 00:08:59.147 "name": "BaseBdev1", 00:08:59.147 "aliases": [ 00:08:59.147 "dfa4c1ef-3ec0-11ef-b9c4-5b09e08d4792" 00:08:59.147 ], 00:08:59.147 "product_name": "Malloc disk", 00:08:59.147 "block_size": 512, 00:08:59.147 "num_blocks": 65536, 00:08:59.147 "uuid": "dfa4c1ef-3ec0-11ef-b9c4-5b09e08d4792", 00:08:59.147 "assigned_rate_limits": { 00:08:59.147 "rw_ios_per_sec": 0, 00:08:59.147 "rw_mbytes_per_sec": 0, 00:08:59.147 "r_mbytes_per_sec": 0, 00:08:59.147 "w_mbytes_per_sec": 0 00:08:59.147 }, 00:08:59.147 "claimed": true, 00:08:59.147 "claim_type": "exclusive_write", 00:08:59.147 "zoned": false, 00:08:59.147 "supported_io_types": { 00:08:59.147 "read": true, 00:08:59.147 "write": true, 00:08:59.147 "unmap": true, 00:08:59.147 "write_zeroes": true, 00:08:59.147 "flush": true, 00:08:59.147 "reset": true, 00:08:59.147 "compare": false, 00:08:59.147 "compare_and_write": false, 00:08:59.147 "abort": true, 00:08:59.147 "nvme_admin": false, 00:08:59.147 "nvme_io": false 00:08:59.147 }, 00:08:59.147 "memory_domains": [ 00:08:59.147 { 00:08:59.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.147 "dma_device_type": 2 00:08:59.147 } 00:08:59.147 ], 00:08:59.147 "driver_specific": {} 00:08:59.147 } 00:08:59.147 ] 00:08:59.147 13:32:38 -- common/autotest_common.sh@895 -- # return 0 00:08:59.147 13:32:38 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:59.147 13:32:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:59.147 13:32:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:59.147 13:32:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:59.147 13:32:38 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:59.147 13:32:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:59.147 13:32:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:59.147 13:32:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:59.147 13:32:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:59.147 13:32:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:59.147 13:32:38 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:59.147 13:32:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.147 13:32:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:59.147 "name": "Existed_Raid", 00:08:59.147 "uuid": "df8a8241-3ec0-11ef-b9c4-5b09e08d4792", 00:08:59.147 "strip_size_kb": 64, 00:08:59.147 "state": "configuring", 00:08:59.147 "raid_level": "concat", 00:08:59.147 "superblock": true, 00:08:59.147 "num_base_bdevs": 4, 00:08:59.147 "num_base_bdevs_discovered": 1, 00:08:59.147 "num_base_bdevs_operational": 4, 00:08:59.147 "base_bdevs_list": [ 00:08:59.147 { 00:08:59.147 "name": "BaseBdev1", 00:08:59.147 "uuid": "dfa4c1ef-3ec0-11ef-b9c4-5b09e08d4792", 00:08:59.147 "is_configured": true, 00:08:59.147 "data_offset": 2048, 00:08:59.147 "data_size": 63488 00:08:59.147 }, 00:08:59.147 { 00:08:59.147 "name": "BaseBdev2", 00:08:59.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.147 "is_configured": false, 00:08:59.147 "data_offset": 0, 00:08:59.147 "data_size": 0 00:08:59.147 }, 00:08:59.147 { 00:08:59.147 "name": "BaseBdev3", 00:08:59.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.147 "is_configured": false, 00:08:59.147 "data_offset": 0, 00:08:59.147 "data_size": 0 00:08:59.147 }, 00:08:59.147 { 00:08:59.148 "name": "BaseBdev4", 00:08:59.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.148 "is_configured": false, 00:08:59.148 "data_offset": 0, 00:08:59.148 "data_size": 0 00:08:59.148 } 00:08:59.148 ] 00:08:59.148 }' 00:08:59.148 13:32:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:59.148 13:32:38 -- common/autotest_common.sh@10 -- # set +x 00:08:59.408 13:32:38 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:59.668 [2024-07-10 13:32:38.933563] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:59.668 [2024-07-10 13:32:38.933588] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d053500 name Existed_Raid, state configuring 00:08:59.668 13:32:38 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:08:59.668 13:32:38 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:59.927 13:32:39 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:59.927 BaseBdev1 00:08:59.927 13:32:39 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:08:59.927 13:32:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:08:59.927 13:32:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:59.927 13:32:39 -- common/autotest_common.sh@889 -- # local i 00:08:59.927 13:32:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:59.927 13:32:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:59.928 13:32:39 -- 
common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:00.187 13:32:39 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:00.447 [ 00:09:00.447 { 00:09:00.447 "name": "BaseBdev1", 00:09:00.447 "aliases": [ 00:09:00.447 "e07d8d2a-3ec0-11ef-b9c4-5b09e08d4792" 00:09:00.447 ], 00:09:00.447 "product_name": "Malloc disk", 00:09:00.447 "block_size": 512, 00:09:00.447 "num_blocks": 65536, 00:09:00.447 "uuid": "e07d8d2a-3ec0-11ef-b9c4-5b09e08d4792", 00:09:00.447 "assigned_rate_limits": { 00:09:00.447 "rw_ios_per_sec": 0, 00:09:00.447 "rw_mbytes_per_sec": 0, 00:09:00.447 "r_mbytes_per_sec": 0, 00:09:00.447 "w_mbytes_per_sec": 0 00:09:00.447 }, 00:09:00.447 "claimed": false, 00:09:00.447 "zoned": false, 00:09:00.447 "supported_io_types": { 00:09:00.447 "read": true, 00:09:00.447 "write": true, 00:09:00.447 "unmap": true, 00:09:00.447 "write_zeroes": true, 00:09:00.447 "flush": true, 00:09:00.447 "reset": true, 00:09:00.447 "compare": false, 00:09:00.447 "compare_and_write": false, 00:09:00.447 "abort": true, 00:09:00.447 "nvme_admin": false, 00:09:00.447 "nvme_io": false 00:09:00.447 }, 00:09:00.447 "memory_domains": [ 00:09:00.447 { 00:09:00.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.447 "dma_device_type": 2 00:09:00.447 } 00:09:00.447 ], 00:09:00.447 "driver_specific": {} 00:09:00.447 } 00:09:00.447 ] 00:09:00.447 13:32:39 -- common/autotest_common.sh@895 -- # return 0 00:09:00.447 13:32:39 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:00.707 [2024-07-10 13:32:39.822286] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.707 [2024-07-10 13:32:39.822694] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:00.707 [2024-07-10 13:32:39.822732] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:00.707 [2024-07-10 13:32:39.822736] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:00.707 [2024-07-10 13:32:39.822742] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:00.707 [2024-07-10 13:32:39.822745] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:00.707 [2024-07-10 13:32:39.822750] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:00.707 13:32:39 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:09:00.707 13:32:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:00.707 13:32:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:00.707 13:32:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:00.707 13:32:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:00.707 13:32:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:00.707 13:32:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:00.707 13:32:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:00.707 13:32:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:00.707 13:32:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:00.707 13:32:39 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:00.707 13:32:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:00.707 13:32:39 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:00.707 13:32:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.707 13:32:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:00.707 "name": "Existed_Raid", 00:09:00.707 "uuid": "e0d12dec-3ec0-11ef-b9c4-5b09e08d4792", 00:09:00.707 "strip_size_kb": 64, 00:09:00.707 "state": "configuring", 00:09:00.707 "raid_level": "concat", 00:09:00.707 "superblock": true, 00:09:00.707 "num_base_bdevs": 4, 00:09:00.707 "num_base_bdevs_discovered": 1, 00:09:00.707 "num_base_bdevs_operational": 4, 00:09:00.707 "base_bdevs_list": [ 00:09:00.707 { 00:09:00.707 "name": "BaseBdev1", 00:09:00.707 "uuid": "e07d8d2a-3ec0-11ef-b9c4-5b09e08d4792", 00:09:00.707 "is_configured": true, 00:09:00.707 "data_offset": 2048, 00:09:00.707 "data_size": 63488 00:09:00.707 }, 00:09:00.707 { 00:09:00.707 "name": "BaseBdev2", 00:09:00.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.707 "is_configured": false, 00:09:00.707 "data_offset": 0, 00:09:00.707 "data_size": 0 00:09:00.707 }, 00:09:00.707 { 00:09:00.707 "name": "BaseBdev3", 00:09:00.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.707 "is_configured": false, 00:09:00.707 "data_offset": 0, 00:09:00.707 "data_size": 0 00:09:00.707 }, 00:09:00.707 { 00:09:00.707 "name": "BaseBdev4", 00:09:00.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.707 "is_configured": false, 00:09:00.707 "data_offset": 0, 00:09:00.707 "data_size": 0 00:09:00.707 } 00:09:00.707 ] 00:09:00.707 }' 00:09:00.707 13:32:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:00.707 13:32:40 -- common/autotest_common.sh@10 -- # set +x 00:09:00.967 13:32:40 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:01.227 [2024-07-10 13:32:40.466454] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.227 BaseBdev2 00:09:01.227 13:32:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:09:01.227 13:32:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:09:01.227 13:32:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:01.227 13:32:40 -- common/autotest_common.sh@889 -- # local i 00:09:01.227 13:32:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:01.227 13:32:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:01.227 13:32:40 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:01.488 13:32:40 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:01.488 [ 00:09:01.488 { 00:09:01.488 "name": "BaseBdev2", 00:09:01.488 "aliases": [ 00:09:01.488 "e1337563-3ec0-11ef-b9c4-5b09e08d4792" 00:09:01.488 ], 00:09:01.488 "product_name": "Malloc disk", 00:09:01.488 "block_size": 512, 00:09:01.488 "num_blocks": 65536, 00:09:01.488 "uuid": "e1337563-3ec0-11ef-b9c4-5b09e08d4792", 00:09:01.488 "assigned_rate_limits": { 00:09:01.488 "rw_ios_per_sec": 0, 00:09:01.488 "rw_mbytes_per_sec": 0, 00:09:01.488 "r_mbytes_per_sec": 0, 00:09:01.488 "w_mbytes_per_sec": 0 00:09:01.488 }, 00:09:01.488 "claimed": true, 
00:09:01.488 "claim_type": "exclusive_write", 00:09:01.488 "zoned": false, 00:09:01.488 "supported_io_types": { 00:09:01.488 "read": true, 00:09:01.488 "write": true, 00:09:01.488 "unmap": true, 00:09:01.488 "write_zeroes": true, 00:09:01.488 "flush": true, 00:09:01.488 "reset": true, 00:09:01.488 "compare": false, 00:09:01.488 "compare_and_write": false, 00:09:01.488 "abort": true, 00:09:01.488 "nvme_admin": false, 00:09:01.488 "nvme_io": false 00:09:01.488 }, 00:09:01.488 "memory_domains": [ 00:09:01.488 { 00:09:01.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.488 "dma_device_type": 2 00:09:01.488 } 00:09:01.488 ], 00:09:01.488 "driver_specific": {} 00:09:01.488 } 00:09:01.488 ] 00:09:01.488 13:32:40 -- common/autotest_common.sh@895 -- # return 0 00:09:01.488 13:32:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:01.488 13:32:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:01.488 13:32:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:01.488 13:32:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:01.488 13:32:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:01.488 13:32:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:01.488 13:32:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:01.488 13:32:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:01.488 13:32:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:01.488 13:32:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:01.488 13:32:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:01.488 13:32:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:01.488 13:32:40 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:01.488 13:32:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.748 13:32:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:01.748 "name": "Existed_Raid", 00:09:01.748 "uuid": "e0d12dec-3ec0-11ef-b9c4-5b09e08d4792", 00:09:01.748 "strip_size_kb": 64, 00:09:01.748 "state": "configuring", 00:09:01.748 "raid_level": "concat", 00:09:01.748 "superblock": true, 00:09:01.748 "num_base_bdevs": 4, 00:09:01.748 "num_base_bdevs_discovered": 2, 00:09:01.748 "num_base_bdevs_operational": 4, 00:09:01.748 "base_bdevs_list": [ 00:09:01.748 { 00:09:01.748 "name": "BaseBdev1", 00:09:01.748 "uuid": "e07d8d2a-3ec0-11ef-b9c4-5b09e08d4792", 00:09:01.748 "is_configured": true, 00:09:01.748 "data_offset": 2048, 00:09:01.748 "data_size": 63488 00:09:01.748 }, 00:09:01.748 { 00:09:01.748 "name": "BaseBdev2", 00:09:01.748 "uuid": "e1337563-3ec0-11ef-b9c4-5b09e08d4792", 00:09:01.748 "is_configured": true, 00:09:01.748 "data_offset": 2048, 00:09:01.748 "data_size": 63488 00:09:01.748 }, 00:09:01.748 { 00:09:01.748 "name": "BaseBdev3", 00:09:01.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.748 "is_configured": false, 00:09:01.748 "data_offset": 0, 00:09:01.748 "data_size": 0 00:09:01.748 }, 00:09:01.748 { 00:09:01.748 "name": "BaseBdev4", 00:09:01.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.748 "is_configured": false, 00:09:01.748 "data_offset": 0, 00:09:01.749 "data_size": 0 00:09:01.749 } 00:09:01.749 ] 00:09:01.749 }' 00:09:01.749 13:32:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:01.749 13:32:41 -- common/autotest_common.sh@10 -- # set +x 00:09:02.008 13:32:41 -- bdev/bdev_raid.sh@256 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:02.271 [2024-07-10 13:32:41.466540] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.271 BaseBdev3 00:09:02.271 13:32:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:09:02.271 13:32:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:09:02.271 13:32:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:02.271 13:32:41 -- common/autotest_common.sh@889 -- # local i 00:09:02.271 13:32:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:02.271 13:32:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:02.271 13:32:41 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:02.531 13:32:41 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:02.531 [ 00:09:02.531 { 00:09:02.531 "name": "BaseBdev3", 00:09:02.531 "aliases": [ 00:09:02.531 "e1cc1067-3ec0-11ef-b9c4-5b09e08d4792" 00:09:02.531 ], 00:09:02.531 "product_name": "Malloc disk", 00:09:02.531 "block_size": 512, 00:09:02.531 "num_blocks": 65536, 00:09:02.531 "uuid": "e1cc1067-3ec0-11ef-b9c4-5b09e08d4792", 00:09:02.531 "assigned_rate_limits": { 00:09:02.531 "rw_ios_per_sec": 0, 00:09:02.531 "rw_mbytes_per_sec": 0, 00:09:02.531 "r_mbytes_per_sec": 0, 00:09:02.531 "w_mbytes_per_sec": 0 00:09:02.531 }, 00:09:02.531 "claimed": true, 00:09:02.531 "claim_type": "exclusive_write", 00:09:02.531 "zoned": false, 00:09:02.531 "supported_io_types": { 00:09:02.531 "read": true, 00:09:02.531 "write": true, 00:09:02.531 "unmap": true, 00:09:02.531 "write_zeroes": true, 00:09:02.531 "flush": true, 00:09:02.531 "reset": true, 00:09:02.531 "compare": false, 00:09:02.531 "compare_and_write": false, 00:09:02.531 "abort": true, 00:09:02.531 "nvme_admin": false, 00:09:02.531 "nvme_io": false 00:09:02.531 }, 00:09:02.531 "memory_domains": [ 00:09:02.531 { 00:09:02.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.531 "dma_device_type": 2 00:09:02.531 } 00:09:02.531 ], 00:09:02.531 "driver_specific": {} 00:09:02.531 } 00:09:02.531 ] 00:09:02.531 13:32:41 -- common/autotest_common.sh@895 -- # return 0 00:09:02.531 13:32:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:02.531 13:32:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:02.531 13:32:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:02.531 13:32:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:02.531 13:32:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:02.531 13:32:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:02.531 13:32:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:02.531 13:32:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:02.531 13:32:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:02.531 13:32:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:02.531 13:32:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:02.531 13:32:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:02.531 13:32:41 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:02.531 13:32:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:02.789 13:32:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:02.789 "name": "Existed_Raid", 00:09:02.789 "uuid": "e0d12dec-3ec0-11ef-b9c4-5b09e08d4792", 00:09:02.789 "strip_size_kb": 64, 00:09:02.789 "state": "configuring", 00:09:02.789 "raid_level": "concat", 00:09:02.789 "superblock": true, 00:09:02.789 "num_base_bdevs": 4, 00:09:02.789 "num_base_bdevs_discovered": 3, 00:09:02.789 "num_base_bdevs_operational": 4, 00:09:02.789 "base_bdevs_list": [ 00:09:02.789 { 00:09:02.789 "name": "BaseBdev1", 00:09:02.789 "uuid": "e07d8d2a-3ec0-11ef-b9c4-5b09e08d4792", 00:09:02.789 "is_configured": true, 00:09:02.789 "data_offset": 2048, 00:09:02.789 "data_size": 63488 00:09:02.789 }, 00:09:02.789 { 00:09:02.789 "name": "BaseBdev2", 00:09:02.789 "uuid": "e1337563-3ec0-11ef-b9c4-5b09e08d4792", 00:09:02.789 "is_configured": true, 00:09:02.789 "data_offset": 2048, 00:09:02.789 "data_size": 63488 00:09:02.789 }, 00:09:02.789 { 00:09:02.789 "name": "BaseBdev3", 00:09:02.789 "uuid": "e1cc1067-3ec0-11ef-b9c4-5b09e08d4792", 00:09:02.789 "is_configured": true, 00:09:02.789 "data_offset": 2048, 00:09:02.789 "data_size": 63488 00:09:02.789 }, 00:09:02.789 { 00:09:02.789 "name": "BaseBdev4", 00:09:02.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.789 "is_configured": false, 00:09:02.789 "data_offset": 0, 00:09:02.789 "data_size": 0 00:09:02.789 } 00:09:02.789 ] 00:09:02.789 }' 00:09:02.789 13:32:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:02.789 13:32:42 -- common/autotest_common.sh@10 -- # set +x 00:09:03.047 13:32:42 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:09:03.307 [2024-07-10 13:32:42.498691] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:03.307 [2024-07-10 13:32:42.498749] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d053a00 00:09:03.307 [2024-07-10 13:32:42.498753] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:03.307 [2024-07-10 13:32:42.498770] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d0b6ec0 00:09:03.307 [2024-07-10 13:32:42.498804] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d053a00 00:09:03.307 [2024-07-10 13:32:42.498807] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82d053a00 00:09:03.307 [2024-07-10 13:32:42.498820] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.307 BaseBdev4 00:09:03.307 13:32:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:09:03.307 13:32:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:09:03.307 13:32:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:03.307 13:32:42 -- common/autotest_common.sh@889 -- # local i 00:09:03.307 13:32:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:03.307 13:32:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:03.307 13:32:42 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:03.662 13:32:42 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:03.662 [ 00:09:03.662 { 00:09:03.662 "name": "BaseBdev4", 00:09:03.662 "aliases": [ 00:09:03.662 
"e2698e51-3ec0-11ef-b9c4-5b09e08d4792" 00:09:03.662 ], 00:09:03.662 "product_name": "Malloc disk", 00:09:03.662 "block_size": 512, 00:09:03.662 "num_blocks": 65536, 00:09:03.662 "uuid": "e2698e51-3ec0-11ef-b9c4-5b09e08d4792", 00:09:03.662 "assigned_rate_limits": { 00:09:03.662 "rw_ios_per_sec": 0, 00:09:03.662 "rw_mbytes_per_sec": 0, 00:09:03.662 "r_mbytes_per_sec": 0, 00:09:03.662 "w_mbytes_per_sec": 0 00:09:03.662 }, 00:09:03.662 "claimed": true, 00:09:03.662 "claim_type": "exclusive_write", 00:09:03.662 "zoned": false, 00:09:03.662 "supported_io_types": { 00:09:03.662 "read": true, 00:09:03.662 "write": true, 00:09:03.662 "unmap": true, 00:09:03.662 "write_zeroes": true, 00:09:03.662 "flush": true, 00:09:03.662 "reset": true, 00:09:03.662 "compare": false, 00:09:03.662 "compare_and_write": false, 00:09:03.662 "abort": true, 00:09:03.662 "nvme_admin": false, 00:09:03.662 "nvme_io": false 00:09:03.662 }, 00:09:03.662 "memory_domains": [ 00:09:03.662 { 00:09:03.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.662 "dma_device_type": 2 00:09:03.662 } 00:09:03.662 ], 00:09:03.662 "driver_specific": {} 00:09:03.662 } 00:09:03.662 ] 00:09:03.662 13:32:42 -- common/autotest_common.sh@895 -- # return 0 00:09:03.662 13:32:42 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:03.662 13:32:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:03.662 13:32:42 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:03.662 13:32:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:03.662 13:32:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:03.662 13:32:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:03.662 13:32:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:03.662 13:32:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:03.662 13:32:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:03.662 13:32:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:03.662 13:32:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:03.662 13:32:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:03.662 13:32:42 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:03.662 13:32:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.922 13:32:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:03.922 "name": "Existed_Raid", 00:09:03.922 "uuid": "e0d12dec-3ec0-11ef-b9c4-5b09e08d4792", 00:09:03.922 "strip_size_kb": 64, 00:09:03.922 "state": "online", 00:09:03.922 "raid_level": "concat", 00:09:03.922 "superblock": true, 00:09:03.922 "num_base_bdevs": 4, 00:09:03.922 "num_base_bdevs_discovered": 4, 00:09:03.922 "num_base_bdevs_operational": 4, 00:09:03.922 "base_bdevs_list": [ 00:09:03.922 { 00:09:03.922 "name": "BaseBdev1", 00:09:03.922 "uuid": "e07d8d2a-3ec0-11ef-b9c4-5b09e08d4792", 00:09:03.922 "is_configured": true, 00:09:03.922 "data_offset": 2048, 00:09:03.922 "data_size": 63488 00:09:03.922 }, 00:09:03.922 { 00:09:03.922 "name": "BaseBdev2", 00:09:03.922 "uuid": "e1337563-3ec0-11ef-b9c4-5b09e08d4792", 00:09:03.922 "is_configured": true, 00:09:03.922 "data_offset": 2048, 00:09:03.922 "data_size": 63488 00:09:03.922 }, 00:09:03.922 { 00:09:03.922 "name": "BaseBdev3", 00:09:03.922 "uuid": "e1cc1067-3ec0-11ef-b9c4-5b09e08d4792", 00:09:03.922 "is_configured": true, 00:09:03.922 "data_offset": 2048, 00:09:03.922 "data_size": 63488 00:09:03.922 
}, 00:09:03.922 { 00:09:03.922 "name": "BaseBdev4", 00:09:03.922 "uuid": "e2698e51-3ec0-11ef-b9c4-5b09e08d4792", 00:09:03.922 "is_configured": true, 00:09:03.922 "data_offset": 2048, 00:09:03.922 "data_size": 63488 00:09:03.922 } 00:09:03.922 ] 00:09:03.922 }' 00:09:03.922 13:32:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:03.922 13:32:43 -- common/autotest_common.sh@10 -- # set +x 00:09:04.181 13:32:43 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:04.182 [2024-07-10 13:32:43.550765] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:04.182 [2024-07-10 13:32:43.550787] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.182 [2024-07-10 13:32:43.550796] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:04.442 "name": "Existed_Raid", 00:09:04.442 "uuid": "e0d12dec-3ec0-11ef-b9c4-5b09e08d4792", 00:09:04.442 "strip_size_kb": 64, 00:09:04.442 "state": "offline", 00:09:04.442 "raid_level": "concat", 00:09:04.442 "superblock": true, 00:09:04.442 "num_base_bdevs": 4, 00:09:04.442 "num_base_bdevs_discovered": 3, 00:09:04.442 "num_base_bdevs_operational": 3, 00:09:04.442 "base_bdevs_list": [ 00:09:04.442 { 00:09:04.442 "name": null, 00:09:04.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.442 "is_configured": false, 00:09:04.442 "data_offset": 2048, 00:09:04.442 "data_size": 63488 00:09:04.442 }, 00:09:04.442 { 00:09:04.442 "name": "BaseBdev2", 00:09:04.442 "uuid": "e1337563-3ec0-11ef-b9c4-5b09e08d4792", 00:09:04.442 "is_configured": true, 00:09:04.442 "data_offset": 2048, 00:09:04.442 "data_size": 63488 00:09:04.442 }, 00:09:04.442 { 00:09:04.442 "name": "BaseBdev3", 00:09:04.442 "uuid": "e1cc1067-3ec0-11ef-b9c4-5b09e08d4792", 00:09:04.442 "is_configured": true, 00:09:04.442 "data_offset": 2048, 00:09:04.442 "data_size": 63488 00:09:04.442 }, 00:09:04.442 { 00:09:04.442 "name": "BaseBdev4", 00:09:04.442 "uuid": "e2698e51-3ec0-11ef-b9c4-5b09e08d4792", 
00:09:04.442 "is_configured": true, 00:09:04.442 "data_offset": 2048, 00:09:04.442 "data_size": 63488 00:09:04.442 } 00:09:04.442 ] 00:09:04.442 }' 00:09:04.442 13:32:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:04.442 13:32:43 -- common/autotest_common.sh@10 -- # set +x 00:09:04.702 13:32:44 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:09:04.702 13:32:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:04.702 13:32:44 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:04.702 13:32:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:04.962 13:32:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:04.962 13:32:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:04.962 13:32:44 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:05.221 [2024-07-10 13:32:44.435596] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:05.221 13:32:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:05.221 13:32:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:05.221 13:32:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:05.221 13:32:44 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:05.481 13:32:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:05.481 13:32:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:05.481 13:32:44 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:05.481 [2024-07-10 13:32:44.824318] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:05.481 13:32:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:05.481 13:32:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:05.481 13:32:44 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:05.481 13:32:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:05.741 13:32:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:05.741 13:32:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:05.741 13:32:45 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:09:06.000 [2024-07-10 13:32:45.205086] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:06.000 [2024-07-10 13:32:45.205111] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d053a00 name Existed_Raid, state offline 00:09:06.000 13:32:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:06.000 13:32:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:06.000 13:32:45 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:09:06.000 13:32:45 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@287 -- # killprocess 52428 00:09:06.261 13:32:45 -- common/autotest_common.sh@926 -- # '[' -z 52428 ']' 00:09:06.261 13:32:45 -- common/autotest_common.sh@930 -- # kill -0 52428 00:09:06.261 
13:32:45 -- common/autotest_common.sh@931 -- # uname 00:09:06.261 13:32:45 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:09:06.261 13:32:45 -- common/autotest_common.sh@934 -- # tail -1 00:09:06.261 13:32:45 -- common/autotest_common.sh@934 -- # ps -c -o command 52428 00:09:06.261 13:32:45 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:09:06.261 13:32:45 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:09:06.261 killing process with pid 52428 00:09:06.261 13:32:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52428' 00:09:06.261 13:32:45 -- common/autotest_common.sh@945 -- # kill 52428 00:09:06.261 [2024-07-10 13:32:45.428493] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:06.261 [2024-07-10 13:32:45.428526] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:06.261 13:32:45 -- common/autotest_common.sh@950 -- # wait 52428 00:09:06.261 ************************************ 00:09:06.261 END TEST raid_state_function_test_sb 00:09:06.261 ************************************ 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@289 -- # return 0 00:09:06.261 00:09:06.261 real 0m9.842s 00:09:06.261 user 0m17.174s 00:09:06.261 sys 0m1.809s 00:09:06.261 13:32:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.261 13:32:45 -- common/autotest_common.sh@10 -- # set +x 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:09:06.261 13:32:45 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:06.261 13:32:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:06.261 13:32:45 -- common/autotest_common.sh@10 -- # set +x 00:09:06.261 ************************************ 00:09:06.261 START TEST raid_superblock_test 00:09:06.261 ************************************ 00:09:06.261 13:32:45 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@357 -- # raid_pid=52701 00:09:06.261 13:32:45 -- bdev/bdev_raid.sh@358 -- # waitforlisten 52701 /var/tmp/spdk-raid.sock 00:09:06.261 13:32:45 -- common/autotest_common.sh@819 -- # '[' -z 52701 ']' 
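Each of these raid tests drives a standalone bdev_svc application over a private JSON-RPC socket; the trace above shows the service being launched with bdev_raid debug logging and the test waiting for pid 52701 to listen on /var/tmp/spdk-raid.sock before any RPCs are issued. The lines below are a rough sketch of that start-up and tear-down pattern using only the commands and helper names that appear in this log; backgrounding with & and capturing $! are assumptions about how raid_pid is obtained.

    # sketch, not part of the captured log
    svc=/usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$svc" -r "$sock" -L bdev_raid &      # bdev service with bdev_raid debug logs
    raid_pid=$!
    waitforlisten "$raid_pid" "$sock"     # helper from common/autotest_common.sh
    # test body: everything goes through rpc.py against the private socket, e.g.
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b malloc1
    # ...
    killprocess "$raid_pid"               # helper from common/autotest_common.sh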
00:09:06.261 13:32:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:06.261 13:32:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:06.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:06.261 13:32:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:06.261 13:32:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:06.261 13:32:45 -- common/autotest_common.sh@10 -- # set +x 00:09:06.261 [2024-07-10 13:32:45.627100] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:06.261 [2024-07-10 13:32:45.627263] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:06.831 EAL: TSC is not safe to use in SMP mode 00:09:06.831 EAL: TSC is not invariant 00:09:06.831 [2024-07-10 13:32:46.098938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.831 [2024-07-10 13:32:46.180050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.831 [2024-07-10 13:32:46.180477] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.831 [2024-07-10 13:32:46.180490] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.397 13:32:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:07.397 13:32:46 -- common/autotest_common.sh@852 -- # return 0 00:09:07.397 13:32:46 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:09:07.397 13:32:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:07.397 13:32:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:09:07.397 13:32:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:09:07.397 13:32:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:07.397 13:32:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:07.397 13:32:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:07.397 13:32:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:07.397 13:32:46 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:07.397 malloc1 00:09:07.397 13:32:46 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:07.655 [2024-07-10 13:32:46.871636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:07.655 [2024-07-10 13:32:46.871704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.655 [2024-07-10 13:32:46.872199] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d88a780 00:09:07.655 [2024-07-10 13:32:46.872221] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.655 [2024-07-10 13:32:46.872881] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.655 [2024-07-10 13:32:46.872911] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:07.655 pt1 00:09:07.655 13:32:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:07.655 13:32:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:07.655 13:32:46 -- bdev/bdev_raid.sh@362 -- # local 
bdev_malloc=malloc2 00:09:07.655 13:32:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:09:07.655 13:32:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:07.655 13:32:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:07.655 13:32:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:07.655 13:32:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:07.655 13:32:46 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:07.913 malloc2 00:09:07.913 13:32:47 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:07.913 [2024-07-10 13:32:47.243692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:07.913 [2024-07-10 13:32:47.243737] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.913 [2024-07-10 13:32:47.243759] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d88ac80 00:09:07.913 [2024-07-10 13:32:47.243765] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.914 [2024-07-10 13:32:47.244201] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.914 [2024-07-10 13:32:47.244230] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:07.914 pt2 00:09:07.914 13:32:47 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:07.914 13:32:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:07.914 13:32:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:09:07.914 13:32:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:09:07.914 13:32:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:07.914 13:32:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:07.914 13:32:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:07.914 13:32:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:07.914 13:32:47 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:09:08.171 malloc3 00:09:08.171 13:32:47 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:08.429 [2024-07-10 13:32:47.607739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:08.429 [2024-07-10 13:32:47.607785] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.429 [2024-07-10 13:32:47.607804] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d88b180 00:09:08.429 [2024-07-10 13:32:47.607809] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.429 [2024-07-10 13:32:47.608234] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.429 [2024-07-10 13:32:47.608260] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:08.429 pt3 00:09:08.429 13:32:47 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:08.429 13:32:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:08.429 13:32:47 -- bdev/bdev_raid.sh@362 -- # local 
bdev_malloc=malloc4 00:09:08.429 13:32:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:09:08.429 13:32:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:08.429 13:32:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:08.429 13:32:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:08.429 13:32:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:08.429 13:32:47 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:09:08.687 malloc4 00:09:08.687 13:32:47 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:08.687 [2024-07-10 13:32:47.987791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:08.687 [2024-07-10 13:32:47.987852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.687 [2024-07-10 13:32:47.987871] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d88b680 00:09:08.687 [2024-07-10 13:32:47.987877] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.687 [2024-07-10 13:32:47.988301] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.687 [2024-07-10 13:32:47.988326] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:08.687 pt4 00:09:08.687 13:32:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:08.687 13:32:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:08.687 13:32:48 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:09:08.946 [2024-07-10 13:32:48.179829] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:08.946 [2024-07-10 13:32:48.180249] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:08.946 [2024-07-10 13:32:48.180268] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:08.946 [2024-07-10 13:32:48.180275] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:08.946 [2024-07-10 13:32:48.180321] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d88b900 00:09:08.946 [2024-07-10 13:32:48.180327] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:08.947 [2024-07-10 13:32:48.180356] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d8ede20 00:09:08.947 [2024-07-10 13:32:48.180423] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d88b900 00:09:08.947 [2024-07-10 13:32:48.180426] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d88b900 00:09:08.947 [2024-07-10 13:32:48.180444] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.947 13:32:48 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:08.947 13:32:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:08.947 13:32:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:08.947 13:32:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:08.947 13:32:48 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:09:08.947 13:32:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:08.947 13:32:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:08.947 13:32:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:08.947 13:32:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:08.947 13:32:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:08.947 13:32:48 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:08.947 13:32:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.205 13:32:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:09.205 "name": "raid_bdev1", 00:09:09.205 "uuid": "e5cc705d-3ec0-11ef-b9c4-5b09e08d4792", 00:09:09.205 "strip_size_kb": 64, 00:09:09.205 "state": "online", 00:09:09.205 "raid_level": "concat", 00:09:09.205 "superblock": true, 00:09:09.205 "num_base_bdevs": 4, 00:09:09.205 "num_base_bdevs_discovered": 4, 00:09:09.205 "num_base_bdevs_operational": 4, 00:09:09.205 "base_bdevs_list": [ 00:09:09.205 { 00:09:09.205 "name": "pt1", 00:09:09.205 "uuid": "658b50e7-7882-4152-9294-0885ed8bfcca", 00:09:09.205 "is_configured": true, 00:09:09.205 "data_offset": 2048, 00:09:09.205 "data_size": 63488 00:09:09.205 }, 00:09:09.205 { 00:09:09.205 "name": "pt2", 00:09:09.205 "uuid": "fdbd8bc4-8761-bc5b-bee2-d7fe523ed325", 00:09:09.205 "is_configured": true, 00:09:09.205 "data_offset": 2048, 00:09:09.205 "data_size": 63488 00:09:09.205 }, 00:09:09.205 { 00:09:09.205 "name": "pt3", 00:09:09.205 "uuid": "13976c45-2d5c-695a-9ca9-3f0c3d055818", 00:09:09.205 "is_configured": true, 00:09:09.205 "data_offset": 2048, 00:09:09.205 "data_size": 63488 00:09:09.205 }, 00:09:09.205 { 00:09:09.205 "name": "pt4", 00:09:09.205 "uuid": "04fa973d-fe71-bb53-95a9-fb37bbf6abbe", 00:09:09.205 "is_configured": true, 00:09:09.205 "data_offset": 2048, 00:09:09.205 "data_size": 63488 00:09:09.205 } 00:09:09.205 ] 00:09:09.205 }' 00:09:09.205 13:32:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:09.205 13:32:48 -- common/autotest_common.sh@10 -- # set +x 00:09:09.464 13:32:48 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:09.464 13:32:48 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:09:09.723 [2024-07-10 13:32:48.851946] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.723 13:32:48 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=e5cc705d-3ec0-11ef-b9c4-5b09e08d4792 00:09:09.723 13:32:48 -- bdev/bdev_raid.sh@380 -- # '[' -z e5cc705d-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:09:09.723 13:32:48 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:09.723 [2024-07-10 13:32:49.051946] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.723 [2024-07-10 13:32:49.051967] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.723 [2024-07-10 13:32:49.051979] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.723 [2024-07-10 13:32:49.051988] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.723 [2024-07-10 13:32:49.051992] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d88b900 name raid_bdev1, state offline 00:09:09.723 
13:32:49 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:09.723 13:32:49 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:09:09.981 13:32:49 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:09:09.981 13:32:49 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:09:09.981 13:32:49 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:09.981 13:32:49 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:10.241 13:32:49 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:10.241 13:32:49 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:10.500 13:32:49 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:10.500 13:32:49 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:10.758 13:32:49 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:10.758 13:32:49 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:09:10.758 13:32:50 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:10.758 13:32:50 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:11.017 13:32:50 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:09:11.017 13:32:50 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:11.017 13:32:50 -- common/autotest_common.sh@640 -- # local es=0 00:09:11.017 13:32:50 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:11.017 13:32:50 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.017 13:32:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:11.017 13:32:50 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.017 13:32:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:11.017 13:32:50 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.017 13:32:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:11.017 13:32:50 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.017 13:32:50 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:11.017 13:32:50 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:11.276 [2024-07-10 13:32:50.428136] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:11.276 [2024-07-10 13:32:50.428549] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:11.276 [2024-07-10 13:32:50.428564] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 
00:09:11.276 [2024-07-10 13:32:50.428570] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:11.276 [2024-07-10 13:32:50.428581] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:09:11.276 [2024-07-10 13:32:50.428613] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:09:11.276 [2024-07-10 13:32:50.428621] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:09:11.276 [2024-07-10 13:32:50.428628] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:09:11.276 [2024-07-10 13:32:50.428659] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.276 [2024-07-10 13:32:50.428663] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d88b680 name raid_bdev1, state configuring 00:09:11.276 request: 00:09:11.276 { 00:09:11.276 "name": "raid_bdev1", 00:09:11.276 "raid_level": "concat", 00:09:11.276 "base_bdevs": [ 00:09:11.276 "malloc1", 00:09:11.276 "malloc2", 00:09:11.276 "malloc3", 00:09:11.276 "malloc4" 00:09:11.276 ], 00:09:11.276 "superblock": false, 00:09:11.276 "strip_size_kb": 64, 00:09:11.276 "method": "bdev_raid_create", 00:09:11.276 "req_id": 1 00:09:11.276 } 00:09:11.276 Got JSON-RPC error response 00:09:11.276 response: 00:09:11.276 { 00:09:11.276 "code": -17, 00:09:11.276 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:11.276 } 00:09:11.276 13:32:50 -- common/autotest_common.sh@643 -- # es=1 00:09:11.276 13:32:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:11.276 13:32:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:11.276 13:32:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:11.276 13:32:50 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:11.276 13:32:50 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:09:11.276 13:32:50 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:09:11.276 13:32:50 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:09:11.276 13:32:50 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:11.535 [2024-07-10 13:32:50.812185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:11.535 [2024-07-10 13:32:50.812222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.535 [2024-07-10 13:32:50.812245] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d88b180 00:09:11.535 [2024-07-10 13:32:50.812250] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.535 [2024-07-10 13:32:50.812714] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.535 [2024-07-10 13:32:50.812739] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:11.535 [2024-07-10 13:32:50.812755] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:09:11.535 [2024-07-10 13:32:50.812764] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:11.535 pt1 00:09:11.535 13:32:50 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 
4 00:09:11.535 13:32:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:11.535 13:32:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:11.535 13:32:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:11.535 13:32:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:11.535 13:32:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:11.535 13:32:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:11.535 13:32:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:11.535 13:32:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:11.535 13:32:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:11.535 13:32:50 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:11.535 13:32:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.794 13:32:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:11.794 "name": "raid_bdev1", 00:09:11.794 "uuid": "e5cc705d-3ec0-11ef-b9c4-5b09e08d4792", 00:09:11.794 "strip_size_kb": 64, 00:09:11.794 "state": "configuring", 00:09:11.794 "raid_level": "concat", 00:09:11.794 "superblock": true, 00:09:11.794 "num_base_bdevs": 4, 00:09:11.794 "num_base_bdevs_discovered": 1, 00:09:11.794 "num_base_bdevs_operational": 4, 00:09:11.794 "base_bdevs_list": [ 00:09:11.794 { 00:09:11.794 "name": "pt1", 00:09:11.794 "uuid": "658b50e7-7882-4152-9294-0885ed8bfcca", 00:09:11.794 "is_configured": true, 00:09:11.794 "data_offset": 2048, 00:09:11.794 "data_size": 63488 00:09:11.794 }, 00:09:11.794 { 00:09:11.794 "name": null, 00:09:11.794 "uuid": "fdbd8bc4-8761-bc5b-bee2-d7fe523ed325", 00:09:11.794 "is_configured": false, 00:09:11.794 "data_offset": 2048, 00:09:11.794 "data_size": 63488 00:09:11.794 }, 00:09:11.794 { 00:09:11.794 "name": null, 00:09:11.794 "uuid": "13976c45-2d5c-695a-9ca9-3f0c3d055818", 00:09:11.794 "is_configured": false, 00:09:11.794 "data_offset": 2048, 00:09:11.794 "data_size": 63488 00:09:11.794 }, 00:09:11.794 { 00:09:11.794 "name": null, 00:09:11.794 "uuid": "04fa973d-fe71-bb53-95a9-fb37bbf6abbe", 00:09:11.794 "is_configured": false, 00:09:11.794 "data_offset": 2048, 00:09:11.794 "data_size": 63488 00:09:11.794 } 00:09:11.794 ] 00:09:11.794 }' 00:09:11.794 13:32:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:11.794 13:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:12.054 13:32:51 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:09:12.054 13:32:51 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:12.312 [2024-07-10 13:32:51.448272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:12.312 [2024-07-10 13:32:51.448314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.312 [2024-07-10 13:32:51.448335] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d88a780 00:09:12.312 [2024-07-10 13:32:51.448341] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.312 [2024-07-10 13:32:51.448431] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.312 [2024-07-10 13:32:51.448442] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:12.312 [2024-07-10 13:32:51.448456] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev pt2 00:09:12.312 [2024-07-10 13:32:51.448462] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:12.312 pt2 00:09:12.312 13:32:51 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:12.312 [2024-07-10 13:32:51.636292] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:12.312 13:32:51 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:09:12.312 13:32:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:12.312 13:32:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:12.312 13:32:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:12.312 13:32:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:12.312 13:32:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:12.312 13:32:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:12.312 13:32:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:12.312 13:32:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:12.312 13:32:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:12.312 13:32:51 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:12.312 13:32:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.571 13:32:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:12.571 "name": "raid_bdev1", 00:09:12.571 "uuid": "e5cc705d-3ec0-11ef-b9c4-5b09e08d4792", 00:09:12.571 "strip_size_kb": 64, 00:09:12.571 "state": "configuring", 00:09:12.571 "raid_level": "concat", 00:09:12.571 "superblock": true, 00:09:12.571 "num_base_bdevs": 4, 00:09:12.571 "num_base_bdevs_discovered": 1, 00:09:12.571 "num_base_bdevs_operational": 4, 00:09:12.571 "base_bdevs_list": [ 00:09:12.571 { 00:09:12.571 "name": "pt1", 00:09:12.571 "uuid": "658b50e7-7882-4152-9294-0885ed8bfcca", 00:09:12.571 "is_configured": true, 00:09:12.571 "data_offset": 2048, 00:09:12.571 "data_size": 63488 00:09:12.571 }, 00:09:12.571 { 00:09:12.571 "name": null, 00:09:12.571 "uuid": "fdbd8bc4-8761-bc5b-bee2-d7fe523ed325", 00:09:12.571 "is_configured": false, 00:09:12.571 "data_offset": 2048, 00:09:12.571 "data_size": 63488 00:09:12.571 }, 00:09:12.571 { 00:09:12.571 "name": null, 00:09:12.571 "uuid": "13976c45-2d5c-695a-9ca9-3f0c3d055818", 00:09:12.571 "is_configured": false, 00:09:12.571 "data_offset": 2048, 00:09:12.571 "data_size": 63488 00:09:12.571 }, 00:09:12.571 { 00:09:12.571 "name": null, 00:09:12.571 "uuid": "04fa973d-fe71-bb53-95a9-fb37bbf6abbe", 00:09:12.571 "is_configured": false, 00:09:12.571 "data_offset": 2048, 00:09:12.571 "data_size": 63488 00:09:12.571 } 00:09:12.571 ] 00:09:12.571 }' 00:09:12.571 13:32:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:12.571 13:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:12.830 13:32:52 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:09:12.830 13:32:52 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:12.830 13:32:52 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:13.087 [2024-07-10 13:32:52.284387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:13.087 [2024-07-10 13:32:52.284427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:09:13.087 [2024-07-10 13:32:52.284446] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d88a780 00:09:13.088 [2024-07-10 13:32:52.284451] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.088 [2024-07-10 13:32:52.284552] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.088 [2024-07-10 13:32:52.284559] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:13.088 [2024-07-10 13:32:52.284581] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:13.088 [2024-07-10 13:32:52.284586] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:13.088 pt2 00:09:13.088 13:32:52 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:13.088 13:32:52 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:13.088 13:32:52 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:13.346 [2024-07-10 13:32:52.472413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:13.346 [2024-07-10 13:32:52.472448] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.346 [2024-07-10 13:32:52.472461] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d88bb80 00:09:13.346 [2024-07-10 13:32:52.472467] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.346 [2024-07-10 13:32:52.472518] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.346 [2024-07-10 13:32:52.472524] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:13.346 [2024-07-10 13:32:52.472536] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:09:13.346 [2024-07-10 13:32:52.472541] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:13.346 pt3 00:09:13.346 13:32:52 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:13.346 13:32:52 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:13.346 13:32:52 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:13.346 [2024-07-10 13:32:52.664437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:13.346 [2024-07-10 13:32:52.664474] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.346 [2024-07-10 13:32:52.664487] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d88b900 00:09:13.346 [2024-07-10 13:32:52.664493] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.346 [2024-07-10 13:32:52.664544] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.346 [2024-07-10 13:32:52.664550] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:13.346 [2024-07-10 13:32:52.664562] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:09:13.346 [2024-07-10 13:32:52.664568] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:13.346 [2024-07-10 13:32:52.664587] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d88ac80 
00:09:13.346 [2024-07-10 13:32:52.664590] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:13.346 [2024-07-10 13:32:52.664605] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d8ede20 00:09:13.346 [2024-07-10 13:32:52.664637] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d88ac80 00:09:13.346 [2024-07-10 13:32:52.664640] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d88ac80 00:09:13.346 [2024-07-10 13:32:52.664654] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.346 pt4 00:09:13.346 13:32:52 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:13.346 13:32:52 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:13.346 13:32:52 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:13.346 13:32:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:13.346 13:32:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:13.346 13:32:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:13.346 13:32:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:13.346 13:32:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:13.346 13:32:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:13.346 13:32:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:13.346 13:32:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:13.346 13:32:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:13.346 13:32:52 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:13.346 13:32:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.605 13:32:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:13.605 "name": "raid_bdev1", 00:09:13.605 "uuid": "e5cc705d-3ec0-11ef-b9c4-5b09e08d4792", 00:09:13.605 "strip_size_kb": 64, 00:09:13.605 "state": "online", 00:09:13.605 "raid_level": "concat", 00:09:13.605 "superblock": true, 00:09:13.605 "num_base_bdevs": 4, 00:09:13.605 "num_base_bdevs_discovered": 4, 00:09:13.605 "num_base_bdevs_operational": 4, 00:09:13.605 "base_bdevs_list": [ 00:09:13.605 { 00:09:13.605 "name": "pt1", 00:09:13.605 "uuid": "658b50e7-7882-4152-9294-0885ed8bfcca", 00:09:13.605 "is_configured": true, 00:09:13.605 "data_offset": 2048, 00:09:13.605 "data_size": 63488 00:09:13.605 }, 00:09:13.605 { 00:09:13.605 "name": "pt2", 00:09:13.605 "uuid": "fdbd8bc4-8761-bc5b-bee2-d7fe523ed325", 00:09:13.605 "is_configured": true, 00:09:13.605 "data_offset": 2048, 00:09:13.605 "data_size": 63488 00:09:13.605 }, 00:09:13.605 { 00:09:13.605 "name": "pt3", 00:09:13.605 "uuid": "13976c45-2d5c-695a-9ca9-3f0c3d055818", 00:09:13.605 "is_configured": true, 00:09:13.605 "data_offset": 2048, 00:09:13.605 "data_size": 63488 00:09:13.605 }, 00:09:13.605 { 00:09:13.605 "name": "pt4", 00:09:13.605 "uuid": "04fa973d-fe71-bb53-95a9-fb37bbf6abbe", 00:09:13.605 "is_configured": true, 00:09:13.605 "data_offset": 2048, 00:09:13.605 "data_size": 63488 00:09:13.605 } 00:09:13.605 ] 00:09:13.605 }' 00:09:13.605 13:32:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:13.605 13:32:52 -- common/autotest_common.sh@10 -- # set +x 00:09:13.862 13:32:53 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:13.862 13:32:53 -- bdev/bdev_raid.sh@430 
-- # jq -r '.[] | .uuid' 00:09:14.120 [2024-07-10 13:32:53.312553] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.120 13:32:53 -- bdev/bdev_raid.sh@430 -- # '[' e5cc705d-3ec0-11ef-b9c4-5b09e08d4792 '!=' e5cc705d-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:09:14.120 13:32:53 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:09:14.120 13:32:53 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:14.120 13:32:53 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:14.120 13:32:53 -- bdev/bdev_raid.sh@511 -- # killprocess 52701 00:09:14.120 13:32:53 -- common/autotest_common.sh@926 -- # '[' -z 52701 ']' 00:09:14.120 13:32:53 -- common/autotest_common.sh@930 -- # kill -0 52701 00:09:14.120 13:32:53 -- common/autotest_common.sh@931 -- # uname 00:09:14.120 13:32:53 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:09:14.120 13:32:53 -- common/autotest_common.sh@934 -- # ps -c -o command 52701 00:09:14.120 13:32:53 -- common/autotest_common.sh@934 -- # tail -1 00:09:14.120 13:32:53 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:09:14.120 13:32:53 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:09:14.120 killing process with pid 52701 00:09:14.120 13:32:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52701' 00:09:14.120 13:32:53 -- common/autotest_common.sh@945 -- # kill 52701 00:09:14.120 [2024-07-10 13:32:53.346270] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:14.120 [2024-07-10 13:32:53.346287] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:14.120 [2024-07-10 13:32:53.346311] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:14.120 [2024-07-10 13:32:53.346315] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d88ac80 name raid_bdev1, state offline 00:09:14.120 13:32:53 -- common/autotest_common.sh@950 -- # wait 52701 00:09:14.120 [2024-07-10 13:32:53.364704] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@513 -- # return 0 00:09:14.384 00:09:14.384 real 0m7.889s 00:09:14.384 user 0m13.754s 00:09:14.384 sys 0m1.288s 00:09:14.384 13:32:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.384 13:32:53 -- common/autotest_common.sh@10 -- # set +x 00:09:14.384 ************************************ 00:09:14.384 END TEST raid_superblock_test 00:09:14.384 ************************************ 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:09:14.384 13:32:53 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:14.384 13:32:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:14.384 13:32:53 -- common/autotest_common.sh@10 -- # set +x 00:09:14.384 ************************************ 00:09:14.384 START TEST raid_state_function_test 00:09:14.384 ************************************ 00:09:14.384 13:32:53 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@206 -- # (( i = 1 
)) 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@226 -- # raid_pid=52886 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 52886' 00:09:14.384 Process raid pid: 52886 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:14.384 13:32:53 -- bdev/bdev_raid.sh@228 -- # waitforlisten 52886 /var/tmp/spdk-raid.sock 00:09:14.384 13:32:53 -- common/autotest_common.sh@819 -- # '[' -z 52886 ']' 00:09:14.384 13:32:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:14.384 13:32:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:14.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:14.384 13:32:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:14.384 13:32:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:14.384 13:32:53 -- common/autotest_common.sh@10 -- # set +x 00:09:14.384 [2024-07-10 13:32:53.579893] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
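raid_state_function_test runs against a bare bdev_svc application rather than a full SPDK target, so its state machine can also be poked at interactively with a handful of commands. A rough sketch of the harness startup, using the same paths as this log; the polling loop is only a stand-in for the waitforlisten helper the test actually uses:

  bdev_svc=/usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
  rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # stub app that only brings up the bdev layer; -L bdev_raid enables the debug log seen above
  "$bdev_svc" -r "$sock" -i 0 -L bdev_raid &
  raid_pid=$!

  # crude stand-in for waitforlisten: poll until the RPC socket answers
  until "$rpc_py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

  # creating the raid before any BaseBdev exists leaves Existed_Raid in the "configuring" state
  "$rpc_py" -s "$sock" bdev_raid_create -r raid1 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid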
00:09:14.384 [2024-07-10 13:32:53.580237] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:14.667 EAL: TSC is not safe to use in SMP mode 00:09:14.667 EAL: TSC is not invariant 00:09:14.667 [2024-07-10 13:32:54.017353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.926 [2024-07-10 13:32:54.106673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.926 [2024-07-10 13:32:54.107093] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.926 [2024-07-10 13:32:54.107102] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.184 13:32:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:15.184 13:32:54 -- common/autotest_common.sh@852 -- # return 0 00:09:15.184 13:32:54 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:15.443 [2024-07-10 13:32:54.669990] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:15.443 [2024-07-10 13:32:54.670035] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:15.443 [2024-07-10 13:32:54.670039] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.443 [2024-07-10 13:32:54.670045] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.443 [2024-07-10 13:32:54.670047] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:15.443 [2024-07-10 13:32:54.670053] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:15.443 [2024-07-10 13:32:54.670055] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:15.443 [2024-07-10 13:32:54.670060] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:15.443 13:32:54 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:15.443 13:32:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:15.443 13:32:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:15.443 13:32:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:15.443 13:32:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:15.443 13:32:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:15.443 13:32:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:15.443 13:32:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:15.443 13:32:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:15.443 13:32:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:15.443 13:32:54 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:15.443 13:32:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.723 13:32:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:15.723 "name": "Existed_Raid", 00:09:15.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.723 "strip_size_kb": 0, 00:09:15.723 "state": "configuring", 00:09:15.723 "raid_level": "raid1", 00:09:15.723 "superblock": false, 00:09:15.723 "num_base_bdevs": 4, 00:09:15.723 "num_base_bdevs_discovered": 0, 
00:09:15.723 "num_base_bdevs_operational": 4, 00:09:15.723 "base_bdevs_list": [ 00:09:15.723 { 00:09:15.723 "name": "BaseBdev1", 00:09:15.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.723 "is_configured": false, 00:09:15.723 "data_offset": 0, 00:09:15.723 "data_size": 0 00:09:15.723 }, 00:09:15.723 { 00:09:15.723 "name": "BaseBdev2", 00:09:15.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.723 "is_configured": false, 00:09:15.723 "data_offset": 0, 00:09:15.723 "data_size": 0 00:09:15.723 }, 00:09:15.723 { 00:09:15.723 "name": "BaseBdev3", 00:09:15.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.723 "is_configured": false, 00:09:15.723 "data_offset": 0, 00:09:15.723 "data_size": 0 00:09:15.723 }, 00:09:15.723 { 00:09:15.723 "name": "BaseBdev4", 00:09:15.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.723 "is_configured": false, 00:09:15.723 "data_offset": 0, 00:09:15.723 "data_size": 0 00:09:15.723 } 00:09:15.723 ] 00:09:15.723 }' 00:09:15.723 13:32:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:15.723 13:32:54 -- common/autotest_common.sh@10 -- # set +x 00:09:15.981 13:32:55 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:15.981 [2024-07-10 13:32:55.334067] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.981 [2024-07-10 13:32:55.334086] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b8b4500 name Existed_Raid, state configuring 00:09:15.981 13:32:55 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:16.239 [2024-07-10 13:32:55.502098] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:16.239 [2024-07-10 13:32:55.502129] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:16.239 [2024-07-10 13:32:55.502132] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:16.239 [2024-07-10 13:32:55.502138] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:16.239 [2024-07-10 13:32:55.502140] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:16.239 [2024-07-10 13:32:55.502145] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:16.239 [2024-07-10 13:32:55.502148] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:16.239 [2024-07-10 13:32:55.502152] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:16.239 13:32:55 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:16.497 [2024-07-10 13:32:55.690882] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:16.497 BaseBdev1 00:09:16.497 13:32:55 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:09:16.497 13:32:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:09:16.497 13:32:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:16.497 13:32:55 -- common/autotest_common.sh@889 -- # local i 00:09:16.497 13:32:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:16.497 13:32:55 -- 
common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:16.497 13:32:55 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:16.754 13:32:55 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:16.754 [ 00:09:16.754 { 00:09:16.754 "name": "BaseBdev1", 00:09:16.755 "aliases": [ 00:09:16.755 "ea466bbb-3ec0-11ef-b9c4-5b09e08d4792" 00:09:16.755 ], 00:09:16.755 "product_name": "Malloc disk", 00:09:16.755 "block_size": 512, 00:09:16.755 "num_blocks": 65536, 00:09:16.755 "uuid": "ea466bbb-3ec0-11ef-b9c4-5b09e08d4792", 00:09:16.755 "assigned_rate_limits": { 00:09:16.755 "rw_ios_per_sec": 0, 00:09:16.755 "rw_mbytes_per_sec": 0, 00:09:16.755 "r_mbytes_per_sec": 0, 00:09:16.755 "w_mbytes_per_sec": 0 00:09:16.755 }, 00:09:16.755 "claimed": true, 00:09:16.755 "claim_type": "exclusive_write", 00:09:16.755 "zoned": false, 00:09:16.755 "supported_io_types": { 00:09:16.755 "read": true, 00:09:16.755 "write": true, 00:09:16.755 "unmap": true, 00:09:16.755 "write_zeroes": true, 00:09:16.755 "flush": true, 00:09:16.755 "reset": true, 00:09:16.755 "compare": false, 00:09:16.755 "compare_and_write": false, 00:09:16.755 "abort": true, 00:09:16.755 "nvme_admin": false, 00:09:16.755 "nvme_io": false 00:09:16.755 }, 00:09:16.755 "memory_domains": [ 00:09:16.755 { 00:09:16.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.755 "dma_device_type": 2 00:09:16.755 } 00:09:16.755 ], 00:09:16.755 "driver_specific": {} 00:09:16.755 } 00:09:16.755 ] 00:09:16.755 13:32:56 -- common/autotest_common.sh@895 -- # return 0 00:09:16.755 13:32:56 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:16.755 13:32:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:16.755 13:32:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:16.755 13:32:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:16.755 13:32:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:16.755 13:32:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:16.755 13:32:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:16.755 13:32:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:16.755 13:32:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:16.755 13:32:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:16.755 13:32:56 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:16.755 13:32:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.011 13:32:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:17.011 "name": "Existed_Raid", 00:09:17.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.011 "strip_size_kb": 0, 00:09:17.011 "state": "configuring", 00:09:17.011 "raid_level": "raid1", 00:09:17.011 "superblock": false, 00:09:17.011 "num_base_bdevs": 4, 00:09:17.011 "num_base_bdevs_discovered": 1, 00:09:17.011 "num_base_bdevs_operational": 4, 00:09:17.011 "base_bdevs_list": [ 00:09:17.011 { 00:09:17.011 "name": "BaseBdev1", 00:09:17.011 "uuid": "ea466bbb-3ec0-11ef-b9c4-5b09e08d4792", 00:09:17.011 "is_configured": true, 00:09:17.011 "data_offset": 0, 00:09:17.011 "data_size": 65536 00:09:17.011 }, 00:09:17.011 { 00:09:17.011 "name": "BaseBdev2", 00:09:17.011 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:17.011 "is_configured": false, 00:09:17.011 "data_offset": 0, 00:09:17.011 "data_size": 0 00:09:17.011 }, 00:09:17.011 { 00:09:17.011 "name": "BaseBdev3", 00:09:17.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.011 "is_configured": false, 00:09:17.011 "data_offset": 0, 00:09:17.011 "data_size": 0 00:09:17.011 }, 00:09:17.011 { 00:09:17.012 "name": "BaseBdev4", 00:09:17.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.012 "is_configured": false, 00:09:17.012 "data_offset": 0, 00:09:17.012 "data_size": 0 00:09:17.012 } 00:09:17.012 ] 00:09:17.012 }' 00:09:17.012 13:32:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:17.012 13:32:56 -- common/autotest_common.sh@10 -- # set +x 00:09:17.310 13:32:56 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:17.568 [2024-07-10 13:32:56.726250] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:17.568 [2024-07-10 13:32:56.726274] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b8b4500 name Existed_Raid, state configuring 00:09:17.568 13:32:56 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:09:17.568 13:32:56 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:17.568 [2024-07-10 13:32:56.914282] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.568 [2024-07-10 13:32:56.914901] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:17.568 [2024-07-10 13:32:56.914938] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:17.568 [2024-07-10 13:32:56.914942] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:17.568 [2024-07-10 13:32:56.914948] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:17.568 [2024-07-10 13:32:56.914950] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:17.568 [2024-07-10 13:32:56.914955] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:17.568 13:32:56 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:09:17.568 13:32:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:17.568 13:32:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:17.568 13:32:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:17.568 13:32:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:17.568 13:32:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:17.568 13:32:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:17.568 13:32:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:17.568 13:32:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:17.568 13:32:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:17.568 13:32:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:17.568 13:32:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:17.568 13:32:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.568 13:32:56 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:17.826 13:32:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:17.826 "name": "Existed_Raid", 00:09:17.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.826 "strip_size_kb": 0, 00:09:17.826 "state": "configuring", 00:09:17.826 "raid_level": "raid1", 00:09:17.826 "superblock": false, 00:09:17.826 "num_base_bdevs": 4, 00:09:17.826 "num_base_bdevs_discovered": 1, 00:09:17.826 "num_base_bdevs_operational": 4, 00:09:17.826 "base_bdevs_list": [ 00:09:17.826 { 00:09:17.826 "name": "BaseBdev1", 00:09:17.826 "uuid": "ea466bbb-3ec0-11ef-b9c4-5b09e08d4792", 00:09:17.826 "is_configured": true, 00:09:17.826 "data_offset": 0, 00:09:17.826 "data_size": 65536 00:09:17.826 }, 00:09:17.826 { 00:09:17.826 "name": "BaseBdev2", 00:09:17.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.826 "is_configured": false, 00:09:17.826 "data_offset": 0, 00:09:17.826 "data_size": 0 00:09:17.826 }, 00:09:17.826 { 00:09:17.826 "name": "BaseBdev3", 00:09:17.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.826 "is_configured": false, 00:09:17.826 "data_offset": 0, 00:09:17.826 "data_size": 0 00:09:17.826 }, 00:09:17.826 { 00:09:17.826 "name": "BaseBdev4", 00:09:17.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.826 "is_configured": false, 00:09:17.826 "data_offset": 0, 00:09:17.826 "data_size": 0 00:09:17.826 } 00:09:17.826 ] 00:09:17.826 }' 00:09:17.826 13:32:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:17.826 13:32:57 -- common/autotest_common.sh@10 -- # set +x 00:09:18.083 13:32:57 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:18.340 [2024-07-10 13:32:57.570461] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:18.340 BaseBdev2 00:09:18.340 13:32:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:09:18.340 13:32:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:09:18.340 13:32:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:18.340 13:32:57 -- common/autotest_common.sh@889 -- # local i 00:09:18.340 13:32:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:18.340 13:32:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:18.340 13:32:57 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:18.596 13:32:57 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:18.596 [ 00:09:18.596 { 00:09:18.596 "name": "BaseBdev2", 00:09:18.596 "aliases": [ 00:09:18.596 "eb65528e-3ec0-11ef-b9c4-5b09e08d4792" 00:09:18.596 ], 00:09:18.596 "product_name": "Malloc disk", 00:09:18.596 "block_size": 512, 00:09:18.596 "num_blocks": 65536, 00:09:18.596 "uuid": "eb65528e-3ec0-11ef-b9c4-5b09e08d4792", 00:09:18.596 "assigned_rate_limits": { 00:09:18.596 "rw_ios_per_sec": 0, 00:09:18.596 "rw_mbytes_per_sec": 0, 00:09:18.596 "r_mbytes_per_sec": 0, 00:09:18.596 "w_mbytes_per_sec": 0 00:09:18.596 }, 00:09:18.596 "claimed": true, 00:09:18.596 "claim_type": "exclusive_write", 00:09:18.596 "zoned": false, 00:09:18.596 "supported_io_types": { 00:09:18.596 "read": true, 00:09:18.596 "write": true, 00:09:18.596 "unmap": true, 00:09:18.597 "write_zeroes": true, 00:09:18.597 "flush": true, 00:09:18.597 "reset": true, 00:09:18.597 "compare": false, 00:09:18.597 
"compare_and_write": false, 00:09:18.597 "abort": true, 00:09:18.597 "nvme_admin": false, 00:09:18.597 "nvme_io": false 00:09:18.597 }, 00:09:18.597 "memory_domains": [ 00:09:18.597 { 00:09:18.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.597 "dma_device_type": 2 00:09:18.597 } 00:09:18.597 ], 00:09:18.597 "driver_specific": {} 00:09:18.597 } 00:09:18.597 ] 00:09:18.597 13:32:57 -- common/autotest_common.sh@895 -- # return 0 00:09:18.597 13:32:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:18.597 13:32:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:18.597 13:32:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:18.597 13:32:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:18.597 13:32:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:18.597 13:32:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:18.597 13:32:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:18.597 13:32:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:18.597 13:32:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:18.597 13:32:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:18.597 13:32:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:18.597 13:32:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:18.597 13:32:57 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:18.597 13:32:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.854 13:32:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:18.854 "name": "Existed_Raid", 00:09:18.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.854 "strip_size_kb": 0, 00:09:18.854 "state": "configuring", 00:09:18.854 "raid_level": "raid1", 00:09:18.854 "superblock": false, 00:09:18.854 "num_base_bdevs": 4, 00:09:18.854 "num_base_bdevs_discovered": 2, 00:09:18.854 "num_base_bdevs_operational": 4, 00:09:18.854 "base_bdevs_list": [ 00:09:18.854 { 00:09:18.854 "name": "BaseBdev1", 00:09:18.854 "uuid": "ea466bbb-3ec0-11ef-b9c4-5b09e08d4792", 00:09:18.854 "is_configured": true, 00:09:18.854 "data_offset": 0, 00:09:18.855 "data_size": 65536 00:09:18.855 }, 00:09:18.855 { 00:09:18.855 "name": "BaseBdev2", 00:09:18.855 "uuid": "eb65528e-3ec0-11ef-b9c4-5b09e08d4792", 00:09:18.855 "is_configured": true, 00:09:18.855 "data_offset": 0, 00:09:18.855 "data_size": 65536 00:09:18.855 }, 00:09:18.855 { 00:09:18.855 "name": "BaseBdev3", 00:09:18.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.855 "is_configured": false, 00:09:18.855 "data_offset": 0, 00:09:18.855 "data_size": 0 00:09:18.855 }, 00:09:18.855 { 00:09:18.855 "name": "BaseBdev4", 00:09:18.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.855 "is_configured": false, 00:09:18.855 "data_offset": 0, 00:09:18.855 "data_size": 0 00:09:18.855 } 00:09:18.855 ] 00:09:18.855 }' 00:09:18.855 13:32:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:18.855 13:32:58 -- common/autotest_common.sh@10 -- # set +x 00:09:19.112 13:32:58 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:19.370 [2024-07-10 13:32:58.582556] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.370 BaseBdev3 00:09:19.370 13:32:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 
00:09:19.370 13:32:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:09:19.370 13:32:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:19.370 13:32:58 -- common/autotest_common.sh@889 -- # local i 00:09:19.370 13:32:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:19.370 13:32:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:19.370 13:32:58 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:19.679 13:32:58 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:19.679 [ 00:09:19.679 { 00:09:19.679 "name": "BaseBdev3", 00:09:19.679 "aliases": [ 00:09:19.679 "ebffc2d7-3ec0-11ef-b9c4-5b09e08d4792" 00:09:19.679 ], 00:09:19.679 "product_name": "Malloc disk", 00:09:19.679 "block_size": 512, 00:09:19.679 "num_blocks": 65536, 00:09:19.679 "uuid": "ebffc2d7-3ec0-11ef-b9c4-5b09e08d4792", 00:09:19.679 "assigned_rate_limits": { 00:09:19.679 "rw_ios_per_sec": 0, 00:09:19.679 "rw_mbytes_per_sec": 0, 00:09:19.679 "r_mbytes_per_sec": 0, 00:09:19.679 "w_mbytes_per_sec": 0 00:09:19.679 }, 00:09:19.679 "claimed": true, 00:09:19.679 "claim_type": "exclusive_write", 00:09:19.679 "zoned": false, 00:09:19.679 "supported_io_types": { 00:09:19.679 "read": true, 00:09:19.679 "write": true, 00:09:19.679 "unmap": true, 00:09:19.679 "write_zeroes": true, 00:09:19.679 "flush": true, 00:09:19.679 "reset": true, 00:09:19.679 "compare": false, 00:09:19.679 "compare_and_write": false, 00:09:19.679 "abort": true, 00:09:19.679 "nvme_admin": false, 00:09:19.679 "nvme_io": false 00:09:19.679 }, 00:09:19.679 "memory_domains": [ 00:09:19.679 { 00:09:19.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.679 "dma_device_type": 2 00:09:19.679 } 00:09:19.679 ], 00:09:19.679 "driver_specific": {} 00:09:19.679 } 00:09:19.679 ] 00:09:19.679 13:32:58 -- common/autotest_common.sh@895 -- # return 0 00:09:19.679 13:32:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:19.679 13:32:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:19.679 13:32:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:19.679 13:32:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:19.679 13:32:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:19.679 13:32:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:19.679 13:32:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:19.679 13:32:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:19.679 13:32:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:19.679 13:32:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:19.679 13:32:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:19.679 13:32:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:19.679 13:32:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.679 13:32:58 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:19.937 13:32:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:19.937 "name": "Existed_Raid", 00:09:19.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.937 "strip_size_kb": 0, 00:09:19.937 "state": "configuring", 00:09:19.937 "raid_level": "raid1", 00:09:19.937 "superblock": false, 00:09:19.937 
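Each BaseBdevN above is brought up with the same three-step pattern: create the malloc bdev, flush any pending examine callbacks, then wait for the bdev to register. A condensed sketch of one iteration, assuming the rpc() wrapper from the earlier sketch and the field names shown in the JSON dumps in this log:

  # backing malloc bdev (32 MiB, 512-byte blocks)
  rpc bdev_malloc_create 32 512 -b BaseBdev3

  # waitforbdev: let registered examine hooks finish, then give the bdev up to 2000 ms to appear
  rpc bdev_wait_for_examine
  rpc bdev_get_bdevs -b BaseBdev3 -t 2000

  # Existed_Raid stays "configuring" until the last base bdev shows up
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'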
"num_base_bdevs": 4, 00:09:19.937 "num_base_bdevs_discovered": 3, 00:09:19.937 "num_base_bdevs_operational": 4, 00:09:19.937 "base_bdevs_list": [ 00:09:19.937 { 00:09:19.937 "name": "BaseBdev1", 00:09:19.937 "uuid": "ea466bbb-3ec0-11ef-b9c4-5b09e08d4792", 00:09:19.937 "is_configured": true, 00:09:19.937 "data_offset": 0, 00:09:19.938 "data_size": 65536 00:09:19.938 }, 00:09:19.938 { 00:09:19.938 "name": "BaseBdev2", 00:09:19.938 "uuid": "eb65528e-3ec0-11ef-b9c4-5b09e08d4792", 00:09:19.938 "is_configured": true, 00:09:19.938 "data_offset": 0, 00:09:19.938 "data_size": 65536 00:09:19.938 }, 00:09:19.938 { 00:09:19.938 "name": "BaseBdev3", 00:09:19.938 "uuid": "ebffc2d7-3ec0-11ef-b9c4-5b09e08d4792", 00:09:19.938 "is_configured": true, 00:09:19.938 "data_offset": 0, 00:09:19.938 "data_size": 65536 00:09:19.938 }, 00:09:19.938 { 00:09:19.938 "name": "BaseBdev4", 00:09:19.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.938 "is_configured": false, 00:09:19.938 "data_offset": 0, 00:09:19.938 "data_size": 0 00:09:19.938 } 00:09:19.938 ] 00:09:19.938 }' 00:09:19.938 13:32:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:19.938 13:32:59 -- common/autotest_common.sh@10 -- # set +x 00:09:20.196 13:32:59 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:09:20.454 [2024-07-10 13:32:59.598689] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:20.454 [2024-07-10 13:32:59.598713] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b8b4a00 00:09:20.454 [2024-07-10 13:32:59.598716] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:20.454 [2024-07-10 13:32:59.598737] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b917ec0 00:09:20.454 [2024-07-10 13:32:59.598829] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b8b4a00 00:09:20.454 [2024-07-10 13:32:59.598832] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b8b4a00 00:09:20.454 [2024-07-10 13:32:59.598855] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.454 BaseBdev4 00:09:20.454 13:32:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:09:20.454 13:32:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:09:20.454 13:32:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:20.454 13:32:59 -- common/autotest_common.sh@889 -- # local i 00:09:20.454 13:32:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:20.454 13:32:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:20.454 13:32:59 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:20.454 13:32:59 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:20.713 [ 00:09:20.713 { 00:09:20.713 "name": "BaseBdev4", 00:09:20.713 "aliases": [ 00:09:20.713 "ec9acf77-3ec0-11ef-b9c4-5b09e08d4792" 00:09:20.713 ], 00:09:20.713 "product_name": "Malloc disk", 00:09:20.713 "block_size": 512, 00:09:20.713 "num_blocks": 65536, 00:09:20.713 "uuid": "ec9acf77-3ec0-11ef-b9c4-5b09e08d4792", 00:09:20.713 "assigned_rate_limits": { 00:09:20.713 "rw_ios_per_sec": 0, 00:09:20.713 "rw_mbytes_per_sec": 0, 00:09:20.713 "r_mbytes_per_sec": 0, 
00:09:20.713 "w_mbytes_per_sec": 0 00:09:20.713 }, 00:09:20.713 "claimed": true, 00:09:20.713 "claim_type": "exclusive_write", 00:09:20.713 "zoned": false, 00:09:20.713 "supported_io_types": { 00:09:20.713 "read": true, 00:09:20.713 "write": true, 00:09:20.713 "unmap": true, 00:09:20.713 "write_zeroes": true, 00:09:20.713 "flush": true, 00:09:20.713 "reset": true, 00:09:20.713 "compare": false, 00:09:20.713 "compare_and_write": false, 00:09:20.713 "abort": true, 00:09:20.713 "nvme_admin": false, 00:09:20.713 "nvme_io": false 00:09:20.713 }, 00:09:20.713 "memory_domains": [ 00:09:20.713 { 00:09:20.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.713 "dma_device_type": 2 00:09:20.713 } 00:09:20.713 ], 00:09:20.713 "driver_specific": {} 00:09:20.713 } 00:09:20.713 ] 00:09:20.713 13:32:59 -- common/autotest_common.sh@895 -- # return 0 00:09:20.713 13:32:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:20.713 13:32:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:20.713 13:32:59 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:09:20.713 13:32:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:20.713 13:32:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:20.713 13:32:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:20.713 13:32:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:20.713 13:32:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:20.713 13:32:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:20.713 13:32:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:20.713 13:32:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:20.713 13:32:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:20.713 13:32:59 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:20.713 13:32:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.971 13:33:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:20.971 "name": "Existed_Raid", 00:09:20.971 "uuid": "ec9ad308-3ec0-11ef-b9c4-5b09e08d4792", 00:09:20.971 "strip_size_kb": 0, 00:09:20.971 "state": "online", 00:09:20.971 "raid_level": "raid1", 00:09:20.971 "superblock": false, 00:09:20.971 "num_base_bdevs": 4, 00:09:20.971 "num_base_bdevs_discovered": 4, 00:09:20.971 "num_base_bdevs_operational": 4, 00:09:20.971 "base_bdevs_list": [ 00:09:20.971 { 00:09:20.971 "name": "BaseBdev1", 00:09:20.971 "uuid": "ea466bbb-3ec0-11ef-b9c4-5b09e08d4792", 00:09:20.971 "is_configured": true, 00:09:20.971 "data_offset": 0, 00:09:20.971 "data_size": 65536 00:09:20.971 }, 00:09:20.971 { 00:09:20.971 "name": "BaseBdev2", 00:09:20.971 "uuid": "eb65528e-3ec0-11ef-b9c4-5b09e08d4792", 00:09:20.971 "is_configured": true, 00:09:20.971 "data_offset": 0, 00:09:20.971 "data_size": 65536 00:09:20.971 }, 00:09:20.971 { 00:09:20.971 "name": "BaseBdev3", 00:09:20.971 "uuid": "ebffc2d7-3ec0-11ef-b9c4-5b09e08d4792", 00:09:20.971 "is_configured": true, 00:09:20.971 "data_offset": 0, 00:09:20.971 "data_size": 65536 00:09:20.971 }, 00:09:20.971 { 00:09:20.971 "name": "BaseBdev4", 00:09:20.971 "uuid": "ec9acf77-3ec0-11ef-b9c4-5b09e08d4792", 00:09:20.971 "is_configured": true, 00:09:20.971 "data_offset": 0, 00:09:20.971 "data_size": 65536 00:09:20.971 } 00:09:20.971 ] 00:09:20.971 }' 00:09:20.971 13:33:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:20.971 13:33:00 -- common/autotest_common.sh@10 -- # set 
+x 00:09:21.229 13:33:00 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:21.488 [2024-07-10 13:33:00.622752] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@196 -- # return 0 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:21.488 "name": "Existed_Raid", 00:09:21.488 "uuid": "ec9ad308-3ec0-11ef-b9c4-5b09e08d4792", 00:09:21.488 "strip_size_kb": 0, 00:09:21.488 "state": "online", 00:09:21.488 "raid_level": "raid1", 00:09:21.488 "superblock": false, 00:09:21.488 "num_base_bdevs": 4, 00:09:21.488 "num_base_bdevs_discovered": 3, 00:09:21.488 "num_base_bdevs_operational": 3, 00:09:21.488 "base_bdevs_list": [ 00:09:21.488 { 00:09:21.488 "name": null, 00:09:21.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.488 "is_configured": false, 00:09:21.488 "data_offset": 0, 00:09:21.488 "data_size": 65536 00:09:21.488 }, 00:09:21.488 { 00:09:21.488 "name": "BaseBdev2", 00:09:21.488 "uuid": "eb65528e-3ec0-11ef-b9c4-5b09e08d4792", 00:09:21.488 "is_configured": true, 00:09:21.488 "data_offset": 0, 00:09:21.488 "data_size": 65536 00:09:21.488 }, 00:09:21.488 { 00:09:21.488 "name": "BaseBdev3", 00:09:21.488 "uuid": "ebffc2d7-3ec0-11ef-b9c4-5b09e08d4792", 00:09:21.488 "is_configured": true, 00:09:21.488 "data_offset": 0, 00:09:21.488 "data_size": 65536 00:09:21.488 }, 00:09:21.488 { 00:09:21.488 "name": "BaseBdev4", 00:09:21.488 "uuid": "ec9acf77-3ec0-11ef-b9c4-5b09e08d4792", 00:09:21.488 "is_configured": true, 00:09:21.488 "data_offset": 0, 00:09:21.488 "data_size": 65536 00:09:21.488 } 00:09:21.488 ] 00:09:21.488 }' 00:09:21.488 13:33:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:21.488 13:33:00 -- common/autotest_common.sh@10 -- # set +x 00:09:21.747 13:33:01 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:09:21.747 13:33:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:21.747 13:33:01 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.747 13:33:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:22.005 
13:33:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:22.005 13:33:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:22.005 13:33:01 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:22.263 [2024-07-10 13:33:01.451474] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:22.263 13:33:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:22.263 13:33:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:22.263 13:33:01 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:22.263 13:33:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:22.521 13:33:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:22.521 13:33:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:22.521 13:33:01 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:22.521 [2024-07-10 13:33:01.808217] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:22.522 13:33:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:22.522 13:33:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:22.522 13:33:01 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:22.522 13:33:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:22.779 13:33:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:22.779 13:33:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:22.779 13:33:02 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:09:23.036 [2024-07-10 13:33:02.180902] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:23.036 [2024-07-10 13:33:02.180917] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.036 [2024-07-10 13:33:02.180925] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.036 [2024-07-10 13:33:02.185534] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.036 [2024-07-10 13:33:02.185550] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b8b4a00 name Existed_Raid, state offline 00:09:23.036 13:33:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:23.036 13:33:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:23.036 13:33:02 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:23.036 13:33:02 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:09:23.036 13:33:02 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:09:23.036 13:33:02 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:09:23.036 13:33:02 -- bdev/bdev_raid.sh@287 -- # killprocess 52886 00:09:23.036 13:33:02 -- common/autotest_common.sh@926 -- # '[' -z 52886 ']' 00:09:23.036 13:33:02 -- common/autotest_common.sh@930 -- # kill -0 52886 00:09:23.036 13:33:02 -- common/autotest_common.sh@931 -- # uname 00:09:23.295 13:33:02 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:09:23.295 13:33:02 -- common/autotest_common.sh@934 -- # ps -c -o command 52886 00:09:23.295 13:33:02 -- 
common/autotest_common.sh@934 -- # tail -1 00:09:23.295 13:33:02 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:09:23.295 13:33:02 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:09:23.295 killing process with pid 52886 00:09:23.295 13:33:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52886' 00:09:23.295 13:33:02 -- common/autotest_common.sh@945 -- # kill 52886 00:09:23.295 [2024-07-10 13:33:02.415678] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:23.295 13:33:02 -- common/autotest_common.sh@950 -- # wait 52886 00:09:23.295 [2024-07-10 13:33:02.415724] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:09:23.295 00:09:23.295 real 0m9.003s 00:09:23.295 user 0m15.612s 00:09:23.295 sys 0m1.707s 00:09:23.295 13:33:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:23.295 13:33:02 -- common/autotest_common.sh@10 -- # set +x 00:09:23.295 ************************************ 00:09:23.295 END TEST raid_state_function_test 00:09:23.295 ************************************ 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:09:23.295 13:33:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:23.295 13:33:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:23.295 13:33:02 -- common/autotest_common.sh@10 -- # set +x 00:09:23.295 ************************************ 00:09:23.295 START TEST raid_state_function_test_sb 00:09:23.295 ************************************ 00:09:23.295 13:33:02 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@212 -- # 
'[' raid1 '!=' raid1 ']' 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=53156 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 53156' 00:09:23.295 Process raid pid: 53156 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:23.295 13:33:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 53156 /var/tmp/spdk-raid.sock 00:09:23.295 13:33:02 -- common/autotest_common.sh@819 -- # '[' -z 53156 ']' 00:09:23.295 13:33:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:23.295 13:33:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:23.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:23.295 13:33:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:23.295 13:33:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:23.295 13:33:02 -- common/autotest_common.sh@10 -- # set +x 00:09:23.295 [2024-07-10 13:33:02.628771] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:23.295 [2024-07-10 13:33:02.629069] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:23.861 EAL: TSC is not safe to use in SMP mode 00:09:23.861 EAL: TSC is not invariant 00:09:23.861 [2024-07-10 13:33:03.059841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.861 [2024-07-10 13:33:03.149548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.861 [2024-07-10 13:33:03.149990] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.861 [2024-07-10 13:33:03.150016] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.425 13:33:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:24.425 13:33:03 -- common/autotest_common.sh@852 -- # return 0 00:09:24.425 13:33:03 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:24.425 [2024-07-10 13:33:03.713108] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:24.425 [2024-07-10 13:33:03.713166] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:24.425 [2024-07-10 13:33:03.713169] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:24.425 [2024-07-10 13:33:03.713175] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:24.425 [2024-07-10 13:33:03.713178] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:24.425 [2024-07-10 13:33:03.713183] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:24.425 [2024-07-10 13:33:03.713185] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:24.425 [2024-07-10 13:33:03.713190] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 
doesn't exist now 00:09:24.425 13:33:03 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:24.425 13:33:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:24.426 13:33:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:24.426 13:33:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:24.426 13:33:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:24.426 13:33:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:24.426 13:33:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:24.426 13:33:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:24.426 13:33:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:24.426 13:33:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:24.426 13:33:03 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:24.426 13:33:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.739 13:33:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:24.740 "name": "Existed_Raid", 00:09:24.740 "uuid": "ef0ea113-3ec0-11ef-b9c4-5b09e08d4792", 00:09:24.740 "strip_size_kb": 0, 00:09:24.740 "state": "configuring", 00:09:24.740 "raid_level": "raid1", 00:09:24.740 "superblock": true, 00:09:24.740 "num_base_bdevs": 4, 00:09:24.740 "num_base_bdevs_discovered": 0, 00:09:24.740 "num_base_bdevs_operational": 4, 00:09:24.740 "base_bdevs_list": [ 00:09:24.740 { 00:09:24.740 "name": "BaseBdev1", 00:09:24.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.740 "is_configured": false, 00:09:24.740 "data_offset": 0, 00:09:24.740 "data_size": 0 00:09:24.740 }, 00:09:24.740 { 00:09:24.740 "name": "BaseBdev2", 00:09:24.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.740 "is_configured": false, 00:09:24.740 "data_offset": 0, 00:09:24.740 "data_size": 0 00:09:24.740 }, 00:09:24.740 { 00:09:24.740 "name": "BaseBdev3", 00:09:24.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.740 "is_configured": false, 00:09:24.740 "data_offset": 0, 00:09:24.740 "data_size": 0 00:09:24.740 }, 00:09:24.740 { 00:09:24.740 "name": "BaseBdev4", 00:09:24.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.740 "is_configured": false, 00:09:24.740 "data_offset": 0, 00:09:24.740 "data_size": 0 00:09:24.740 } 00:09:24.740 ] 00:09:24.740 }' 00:09:24.740 13:33:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:24.740 13:33:03 -- common/autotest_common.sh@10 -- # set +x 00:09:24.998 13:33:04 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:24.999 [2024-07-10 13:33:04.349155] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:24.999 [2024-07-10 13:33:04.349176] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cec4500 name Existed_Raid, state configuring 00:09:24.999 13:33:04 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:25.292 [2024-07-10 13:33:04.533206] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.293 [2024-07-10 13:33:04.533245] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.293 [2024-07-10 13:33:04.533249] 
bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:25.293 [2024-07-10 13:33:04.533256] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:25.293 [2024-07-10 13:33:04.533259] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:25.293 [2024-07-10 13:33:04.533265] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:25.293 [2024-07-10 13:33:04.533268] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:25.293 [2024-07-10 13:33:04.533274] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:25.293 13:33:04 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:25.551 [2024-07-10 13:33:04.721976] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.551 BaseBdev1 00:09:25.551 13:33:04 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:09:25.551 13:33:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:09:25.551 13:33:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:25.551 13:33:04 -- common/autotest_common.sh@889 -- # local i 00:09:25.551 13:33:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:25.552 13:33:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:25.552 13:33:04 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:25.811 13:33:04 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:25.812 [ 00:09:25.812 { 00:09:25.812 "name": "BaseBdev1", 00:09:25.812 "aliases": [ 00:09:25.812 "efa874bf-3ec0-11ef-b9c4-5b09e08d4792" 00:09:25.812 ], 00:09:25.812 "product_name": "Malloc disk", 00:09:25.812 "block_size": 512, 00:09:25.812 "num_blocks": 65536, 00:09:25.812 "uuid": "efa874bf-3ec0-11ef-b9c4-5b09e08d4792", 00:09:25.812 "assigned_rate_limits": { 00:09:25.812 "rw_ios_per_sec": 0, 00:09:25.812 "rw_mbytes_per_sec": 0, 00:09:25.812 "r_mbytes_per_sec": 0, 00:09:25.812 "w_mbytes_per_sec": 0 00:09:25.812 }, 00:09:25.812 "claimed": true, 00:09:25.812 "claim_type": "exclusive_write", 00:09:25.812 "zoned": false, 00:09:25.812 "supported_io_types": { 00:09:25.812 "read": true, 00:09:25.812 "write": true, 00:09:25.812 "unmap": true, 00:09:25.812 "write_zeroes": true, 00:09:25.812 "flush": true, 00:09:25.812 "reset": true, 00:09:25.812 "compare": false, 00:09:25.812 "compare_and_write": false, 00:09:25.812 "abort": true, 00:09:25.812 "nvme_admin": false, 00:09:25.812 "nvme_io": false 00:09:25.812 }, 00:09:25.812 "memory_domains": [ 00:09:25.812 { 00:09:25.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.812 "dma_device_type": 2 00:09:25.812 } 00:09:25.812 ], 00:09:25.812 "driver_specific": {} 00:09:25.812 } 00:09:25.812 ] 00:09:25.812 13:33:05 -- common/autotest_common.sh@895 -- # return 0 00:09:25.812 13:33:05 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:25.812 13:33:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:25.812 13:33:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:25.812 13:33:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:25.812 13:33:05 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:25.812 13:33:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:25.812 13:33:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:25.812 13:33:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:25.812 13:33:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:25.812 13:33:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:25.812 13:33:05 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:25.812 13:33:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.072 13:33:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:26.072 "name": "Existed_Raid", 00:09:26.072 "uuid": "ef8bc449-3ec0-11ef-b9c4-5b09e08d4792", 00:09:26.072 "strip_size_kb": 0, 00:09:26.072 "state": "configuring", 00:09:26.072 "raid_level": "raid1", 00:09:26.072 "superblock": true, 00:09:26.072 "num_base_bdevs": 4, 00:09:26.072 "num_base_bdevs_discovered": 1, 00:09:26.072 "num_base_bdevs_operational": 4, 00:09:26.072 "base_bdevs_list": [ 00:09:26.072 { 00:09:26.072 "name": "BaseBdev1", 00:09:26.072 "uuid": "efa874bf-3ec0-11ef-b9c4-5b09e08d4792", 00:09:26.072 "is_configured": true, 00:09:26.072 "data_offset": 2048, 00:09:26.072 "data_size": 63488 00:09:26.072 }, 00:09:26.072 { 00:09:26.072 "name": "BaseBdev2", 00:09:26.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.072 "is_configured": false, 00:09:26.072 "data_offset": 0, 00:09:26.072 "data_size": 0 00:09:26.072 }, 00:09:26.072 { 00:09:26.072 "name": "BaseBdev3", 00:09:26.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.072 "is_configured": false, 00:09:26.072 "data_offset": 0, 00:09:26.072 "data_size": 0 00:09:26.072 }, 00:09:26.072 { 00:09:26.072 "name": "BaseBdev4", 00:09:26.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.072 "is_configured": false, 00:09:26.072 "data_offset": 0, 00:09:26.072 "data_size": 0 00:09:26.072 } 00:09:26.072 ] 00:09:26.072 }' 00:09:26.072 13:33:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:26.072 13:33:05 -- common/autotest_common.sh@10 -- # set +x 00:09:26.332 13:33:05 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:26.591 [2024-07-10 13:33:05.785362] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:26.591 [2024-07-10 13:33:05.785398] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cec4500 name Existed_Raid, state configuring 00:09:26.591 13:33:05 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:09:26.591 13:33:05 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:26.850 13:33:05 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:26.850 BaseBdev1 00:09:26.850 13:33:06 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:09:26.850 13:33:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:09:26.850 13:33:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:26.850 13:33:06 -- common/autotest_common.sh@889 -- # local i 00:09:26.850 13:33:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:26.850 13:33:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:26.850 13:33:06 -- 
common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:27.108 13:33:06 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:27.367 [ 00:09:27.367 { 00:09:27.367 "name": "BaseBdev1", 00:09:27.367 "aliases": [ 00:09:27.367 "f0844d14-3ec0-11ef-b9c4-5b09e08d4792" 00:09:27.367 ], 00:09:27.367 "product_name": "Malloc disk", 00:09:27.367 "block_size": 512, 00:09:27.367 "num_blocks": 65536, 00:09:27.367 "uuid": "f0844d14-3ec0-11ef-b9c4-5b09e08d4792", 00:09:27.367 "assigned_rate_limits": { 00:09:27.367 "rw_ios_per_sec": 0, 00:09:27.367 "rw_mbytes_per_sec": 0, 00:09:27.367 "r_mbytes_per_sec": 0, 00:09:27.367 "w_mbytes_per_sec": 0 00:09:27.367 }, 00:09:27.367 "claimed": false, 00:09:27.367 "zoned": false, 00:09:27.367 "supported_io_types": { 00:09:27.367 "read": true, 00:09:27.367 "write": true, 00:09:27.367 "unmap": true, 00:09:27.367 "write_zeroes": true, 00:09:27.367 "flush": true, 00:09:27.367 "reset": true, 00:09:27.367 "compare": false, 00:09:27.367 "compare_and_write": false, 00:09:27.367 "abort": true, 00:09:27.367 "nvme_admin": false, 00:09:27.367 "nvme_io": false 00:09:27.367 }, 00:09:27.367 "memory_domains": [ 00:09:27.367 { 00:09:27.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.367 "dma_device_type": 2 00:09:27.367 } 00:09:27.367 ], 00:09:27.367 "driver_specific": {} 00:09:27.367 } 00:09:27.367 ] 00:09:27.367 13:33:06 -- common/autotest_common.sh@895 -- # return 0 00:09:27.367 13:33:06 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:27.367 [2024-07-10 13:33:06.734087] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.367 [2024-07-10 13:33:06.734493] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.367 [2024-07-10 13:33:06.734536] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.367 [2024-07-10 13:33:06.734539] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:27.367 [2024-07-10 13:33:06.734546] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:27.367 [2024-07-10 13:33:06.734549] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:27.367 [2024-07-10 13:33:06.734554] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:27.626 13:33:06 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:09:27.626 13:33:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:27.626 13:33:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:27.626 13:33:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:27.626 13:33:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:27.626 13:33:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:27.626 13:33:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:27.626 13:33:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:27.626 13:33:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:27.626 13:33:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:27.626 13:33:06 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:27.626 13:33:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:27.626 13:33:06 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:27.626 13:33:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.626 13:33:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:27.626 "name": "Existed_Raid", 00:09:27.626 "uuid": "f0db9809-3ec0-11ef-b9c4-5b09e08d4792", 00:09:27.626 "strip_size_kb": 0, 00:09:27.626 "state": "configuring", 00:09:27.626 "raid_level": "raid1", 00:09:27.626 "superblock": true, 00:09:27.626 "num_base_bdevs": 4, 00:09:27.626 "num_base_bdevs_discovered": 1, 00:09:27.626 "num_base_bdevs_operational": 4, 00:09:27.626 "base_bdevs_list": [ 00:09:27.626 { 00:09:27.626 "name": "BaseBdev1", 00:09:27.626 "uuid": "f0844d14-3ec0-11ef-b9c4-5b09e08d4792", 00:09:27.626 "is_configured": true, 00:09:27.626 "data_offset": 2048, 00:09:27.626 "data_size": 63488 00:09:27.626 }, 00:09:27.626 { 00:09:27.626 "name": "BaseBdev2", 00:09:27.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.626 "is_configured": false, 00:09:27.626 "data_offset": 0, 00:09:27.626 "data_size": 0 00:09:27.626 }, 00:09:27.626 { 00:09:27.626 "name": "BaseBdev3", 00:09:27.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.626 "is_configured": false, 00:09:27.626 "data_offset": 0, 00:09:27.626 "data_size": 0 00:09:27.626 }, 00:09:27.626 { 00:09:27.626 "name": "BaseBdev4", 00:09:27.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.626 "is_configured": false, 00:09:27.626 "data_offset": 0, 00:09:27.626 "data_size": 0 00:09:27.626 } 00:09:27.626 ] 00:09:27.626 }' 00:09:27.626 13:33:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:27.626 13:33:06 -- common/autotest_common.sh@10 -- # set +x 00:09:27.885 13:33:07 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:28.143 [2024-07-10 13:33:07.370233] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.143 BaseBdev2 00:09:28.143 13:33:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:09:28.143 13:33:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:09:28.143 13:33:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:28.143 13:33:07 -- common/autotest_common.sh@889 -- # local i 00:09:28.144 13:33:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:28.144 13:33:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:28.144 13:33:07 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:28.402 13:33:07 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:28.661 [ 00:09:28.661 { 00:09:28.661 "name": "BaseBdev2", 00:09:28.661 "aliases": [ 00:09:28.661 "f13ca668-3ec0-11ef-b9c4-5b09e08d4792" 00:09:28.661 ], 00:09:28.661 "product_name": "Malloc disk", 00:09:28.661 "block_size": 512, 00:09:28.661 "num_blocks": 65536, 00:09:28.661 "uuid": "f13ca668-3ec0-11ef-b9c4-5b09e08d4792", 00:09:28.661 "assigned_rate_limits": { 00:09:28.661 "rw_ios_per_sec": 0, 00:09:28.661 "rw_mbytes_per_sec": 0, 00:09:28.661 "r_mbytes_per_sec": 0, 00:09:28.661 "w_mbytes_per_sec": 0 00:09:28.661 }, 00:09:28.661 "claimed": true, 
00:09:28.661 "claim_type": "exclusive_write", 00:09:28.661 "zoned": false, 00:09:28.661 "supported_io_types": { 00:09:28.661 "read": true, 00:09:28.661 "write": true, 00:09:28.661 "unmap": true, 00:09:28.661 "write_zeroes": true, 00:09:28.661 "flush": true, 00:09:28.661 "reset": true, 00:09:28.661 "compare": false, 00:09:28.661 "compare_and_write": false, 00:09:28.661 "abort": true, 00:09:28.661 "nvme_admin": false, 00:09:28.661 "nvme_io": false 00:09:28.661 }, 00:09:28.661 "memory_domains": [ 00:09:28.661 { 00:09:28.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.661 "dma_device_type": 2 00:09:28.661 } 00:09:28.661 ], 00:09:28.661 "driver_specific": {} 00:09:28.661 } 00:09:28.661 ] 00:09:28.661 13:33:07 -- common/autotest_common.sh@895 -- # return 0 00:09:28.661 13:33:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:28.661 13:33:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:28.661 13:33:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:28.661 13:33:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:28.661 13:33:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:28.661 13:33:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:28.661 13:33:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:28.661 13:33:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:28.661 13:33:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:28.661 13:33:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:28.661 13:33:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:28.661 13:33:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:28.661 13:33:07 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:28.661 13:33:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.661 13:33:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:28.661 "name": "Existed_Raid", 00:09:28.661 "uuid": "f0db9809-3ec0-11ef-b9c4-5b09e08d4792", 00:09:28.661 "strip_size_kb": 0, 00:09:28.661 "state": "configuring", 00:09:28.661 "raid_level": "raid1", 00:09:28.661 "superblock": true, 00:09:28.661 "num_base_bdevs": 4, 00:09:28.661 "num_base_bdevs_discovered": 2, 00:09:28.662 "num_base_bdevs_operational": 4, 00:09:28.662 "base_bdevs_list": [ 00:09:28.662 { 00:09:28.662 "name": "BaseBdev1", 00:09:28.662 "uuid": "f0844d14-3ec0-11ef-b9c4-5b09e08d4792", 00:09:28.662 "is_configured": true, 00:09:28.662 "data_offset": 2048, 00:09:28.662 "data_size": 63488 00:09:28.662 }, 00:09:28.662 { 00:09:28.662 "name": "BaseBdev2", 00:09:28.662 "uuid": "f13ca668-3ec0-11ef-b9c4-5b09e08d4792", 00:09:28.662 "is_configured": true, 00:09:28.662 "data_offset": 2048, 00:09:28.662 "data_size": 63488 00:09:28.662 }, 00:09:28.662 { 00:09:28.662 "name": "BaseBdev3", 00:09:28.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.662 "is_configured": false, 00:09:28.662 "data_offset": 0, 00:09:28.662 "data_size": 0 00:09:28.662 }, 00:09:28.662 { 00:09:28.662 "name": "BaseBdev4", 00:09:28.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.662 "is_configured": false, 00:09:28.662 "data_offset": 0, 00:09:28.662 "data_size": 0 00:09:28.662 } 00:09:28.662 ] 00:09:28.662 }' 00:09:28.662 13:33:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:28.662 13:33:08 -- common/autotest_common.sh@10 -- # set +x 00:09:29.229 13:33:08 -- bdev/bdev_raid.sh@256 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:29.229 [2024-07-10 13:33:08.470336] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:29.229 BaseBdev3 00:09:29.229 13:33:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:09:29.229 13:33:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:09:29.229 13:33:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:29.229 13:33:08 -- common/autotest_common.sh@889 -- # local i 00:09:29.229 13:33:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:29.229 13:33:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:29.229 13:33:08 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:29.487 13:33:08 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:29.747 [ 00:09:29.747 { 00:09:29.748 "name": "BaseBdev3", 00:09:29.748 "aliases": [ 00:09:29.748 "f1e48426-3ec0-11ef-b9c4-5b09e08d4792" 00:09:29.748 ], 00:09:29.748 "product_name": "Malloc disk", 00:09:29.748 "block_size": 512, 00:09:29.748 "num_blocks": 65536, 00:09:29.748 "uuid": "f1e48426-3ec0-11ef-b9c4-5b09e08d4792", 00:09:29.748 "assigned_rate_limits": { 00:09:29.748 "rw_ios_per_sec": 0, 00:09:29.748 "rw_mbytes_per_sec": 0, 00:09:29.748 "r_mbytes_per_sec": 0, 00:09:29.748 "w_mbytes_per_sec": 0 00:09:29.748 }, 00:09:29.748 "claimed": true, 00:09:29.748 "claim_type": "exclusive_write", 00:09:29.748 "zoned": false, 00:09:29.748 "supported_io_types": { 00:09:29.748 "read": true, 00:09:29.748 "write": true, 00:09:29.748 "unmap": true, 00:09:29.748 "write_zeroes": true, 00:09:29.748 "flush": true, 00:09:29.748 "reset": true, 00:09:29.748 "compare": false, 00:09:29.748 "compare_and_write": false, 00:09:29.748 "abort": true, 00:09:29.748 "nvme_admin": false, 00:09:29.748 "nvme_io": false 00:09:29.748 }, 00:09:29.748 "memory_domains": [ 00:09:29.748 { 00:09:29.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.748 "dma_device_type": 2 00:09:29.748 } 00:09:29.748 ], 00:09:29.748 "driver_specific": {} 00:09:29.748 } 00:09:29.748 ] 00:09:29.748 13:33:08 -- common/autotest_common.sh@895 -- # return 0 00:09:29.748 13:33:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:29.748 13:33:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:29.748 13:33:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:29.748 13:33:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:29.748 13:33:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:29.748 13:33:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:29.748 13:33:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:29.748 13:33:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:29.748 13:33:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:29.748 13:33:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:29.748 13:33:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:29.748 13:33:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:29.748 13:33:08 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:29.748 13:33:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:29.748 13:33:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:29.748 "name": "Existed_Raid", 00:09:29.748 "uuid": "f0db9809-3ec0-11ef-b9c4-5b09e08d4792", 00:09:29.748 "strip_size_kb": 0, 00:09:29.748 "state": "configuring", 00:09:29.748 "raid_level": "raid1", 00:09:29.748 "superblock": true, 00:09:29.748 "num_base_bdevs": 4, 00:09:29.748 "num_base_bdevs_discovered": 3, 00:09:29.748 "num_base_bdevs_operational": 4, 00:09:29.748 "base_bdevs_list": [ 00:09:29.748 { 00:09:29.748 "name": "BaseBdev1", 00:09:29.748 "uuid": "f0844d14-3ec0-11ef-b9c4-5b09e08d4792", 00:09:29.748 "is_configured": true, 00:09:29.748 "data_offset": 2048, 00:09:29.748 "data_size": 63488 00:09:29.748 }, 00:09:29.748 { 00:09:29.748 "name": "BaseBdev2", 00:09:29.748 "uuid": "f13ca668-3ec0-11ef-b9c4-5b09e08d4792", 00:09:29.748 "is_configured": true, 00:09:29.748 "data_offset": 2048, 00:09:29.748 "data_size": 63488 00:09:29.748 }, 00:09:29.748 { 00:09:29.748 "name": "BaseBdev3", 00:09:29.748 "uuid": "f1e48426-3ec0-11ef-b9c4-5b09e08d4792", 00:09:29.748 "is_configured": true, 00:09:29.748 "data_offset": 2048, 00:09:29.748 "data_size": 63488 00:09:29.748 }, 00:09:29.748 { 00:09:29.748 "name": "BaseBdev4", 00:09:29.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.748 "is_configured": false, 00:09:29.748 "data_offset": 0, 00:09:29.748 "data_size": 0 00:09:29.748 } 00:09:29.748 ] 00:09:29.748 }' 00:09:29.748 13:33:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:29.748 13:33:09 -- common/autotest_common.sh@10 -- # set +x 00:09:30.009 13:33:09 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:09:30.268 [2024-07-10 13:33:09.518501] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:30.268 [2024-07-10 13:33:09.518552] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cec4a00 00:09:30.268 [2024-07-10 13:33:09.518556] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:30.268 [2024-07-10 13:33:09.518572] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cf27ec0 00:09:30.268 [2024-07-10 13:33:09.518605] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cec4a00 00:09:30.268 [2024-07-10 13:33:09.518607] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82cec4a00 00:09:30.268 [2024-07-10 13:33:09.518621] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.268 BaseBdev4 00:09:30.268 13:33:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:09:30.268 13:33:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:09:30.268 13:33:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:30.268 13:33:09 -- common/autotest_common.sh@889 -- # local i 00:09:30.268 13:33:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:30.268 13:33:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:30.268 13:33:09 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:30.528 13:33:09 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:30.528 [ 00:09:30.528 { 00:09:30.528 "name": "BaseBdev4", 00:09:30.528 "aliases": [ 00:09:30.528 
"f284744a-3ec0-11ef-b9c4-5b09e08d4792" 00:09:30.528 ], 00:09:30.528 "product_name": "Malloc disk", 00:09:30.528 "block_size": 512, 00:09:30.528 "num_blocks": 65536, 00:09:30.528 "uuid": "f284744a-3ec0-11ef-b9c4-5b09e08d4792", 00:09:30.528 "assigned_rate_limits": { 00:09:30.528 "rw_ios_per_sec": 0, 00:09:30.528 "rw_mbytes_per_sec": 0, 00:09:30.528 "r_mbytes_per_sec": 0, 00:09:30.528 "w_mbytes_per_sec": 0 00:09:30.528 }, 00:09:30.528 "claimed": true, 00:09:30.528 "claim_type": "exclusive_write", 00:09:30.528 "zoned": false, 00:09:30.528 "supported_io_types": { 00:09:30.528 "read": true, 00:09:30.528 "write": true, 00:09:30.528 "unmap": true, 00:09:30.528 "write_zeroes": true, 00:09:30.528 "flush": true, 00:09:30.528 "reset": true, 00:09:30.528 "compare": false, 00:09:30.528 "compare_and_write": false, 00:09:30.528 "abort": true, 00:09:30.528 "nvme_admin": false, 00:09:30.528 "nvme_io": false 00:09:30.528 }, 00:09:30.528 "memory_domains": [ 00:09:30.528 { 00:09:30.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.528 "dma_device_type": 2 00:09:30.528 } 00:09:30.528 ], 00:09:30.528 "driver_specific": {} 00:09:30.528 } 00:09:30.528 ] 00:09:30.528 13:33:09 -- common/autotest_common.sh@895 -- # return 0 00:09:30.528 13:33:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:30.528 13:33:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:30.528 13:33:09 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:09:30.528 13:33:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:30.528 13:33:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:30.528 13:33:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:30.528 13:33:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:30.528 13:33:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:30.528 13:33:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:30.528 13:33:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:30.528 13:33:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:30.528 13:33:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:30.786 13:33:09 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:30.786 13:33:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.787 13:33:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:30.787 "name": "Existed_Raid", 00:09:30.787 "uuid": "f0db9809-3ec0-11ef-b9c4-5b09e08d4792", 00:09:30.787 "strip_size_kb": 0, 00:09:30.787 "state": "online", 00:09:30.787 "raid_level": "raid1", 00:09:30.787 "superblock": true, 00:09:30.787 "num_base_bdevs": 4, 00:09:30.787 "num_base_bdevs_discovered": 4, 00:09:30.787 "num_base_bdevs_operational": 4, 00:09:30.787 "base_bdevs_list": [ 00:09:30.787 { 00:09:30.787 "name": "BaseBdev1", 00:09:30.787 "uuid": "f0844d14-3ec0-11ef-b9c4-5b09e08d4792", 00:09:30.787 "is_configured": true, 00:09:30.787 "data_offset": 2048, 00:09:30.787 "data_size": 63488 00:09:30.787 }, 00:09:30.787 { 00:09:30.787 "name": "BaseBdev2", 00:09:30.787 "uuid": "f13ca668-3ec0-11ef-b9c4-5b09e08d4792", 00:09:30.787 "is_configured": true, 00:09:30.787 "data_offset": 2048, 00:09:30.787 "data_size": 63488 00:09:30.787 }, 00:09:30.787 { 00:09:30.787 "name": "BaseBdev3", 00:09:30.787 "uuid": "f1e48426-3ec0-11ef-b9c4-5b09e08d4792", 00:09:30.787 "is_configured": true, 00:09:30.787 "data_offset": 2048, 00:09:30.787 "data_size": 63488 00:09:30.787 }, 
00:09:30.787 { 00:09:30.787 "name": "BaseBdev4", 00:09:30.787 "uuid": "f284744a-3ec0-11ef-b9c4-5b09e08d4792", 00:09:30.787 "is_configured": true, 00:09:30.787 "data_offset": 2048, 00:09:30.787 "data_size": 63488 00:09:30.787 } 00:09:30.787 ] 00:09:30.787 }' 00:09:30.787 13:33:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:30.787 13:33:10 -- common/autotest_common.sh@10 -- # set +x 00:09:31.046 13:33:10 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:31.305 [2024-07-10 13:33:10.506558] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:31.305 13:33:10 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:09:31.305 13:33:10 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:09:31.305 13:33:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:31.305 13:33:10 -- bdev/bdev_raid.sh@196 -- # return 0 00:09:31.305 13:33:10 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:09:31.305 13:33:10 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:31.305 13:33:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:31.305 13:33:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:31.305 13:33:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:31.305 13:33:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:31.305 13:33:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:31.305 13:33:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:31.305 13:33:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:31.305 13:33:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:31.305 13:33:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:31.305 13:33:10 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:31.305 13:33:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.564 13:33:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:31.564 "name": "Existed_Raid", 00:09:31.564 "uuid": "f0db9809-3ec0-11ef-b9c4-5b09e08d4792", 00:09:31.564 "strip_size_kb": 0, 00:09:31.564 "state": "online", 00:09:31.564 "raid_level": "raid1", 00:09:31.564 "superblock": true, 00:09:31.564 "num_base_bdevs": 4, 00:09:31.564 "num_base_bdevs_discovered": 3, 00:09:31.564 "num_base_bdevs_operational": 3, 00:09:31.564 "base_bdevs_list": [ 00:09:31.564 { 00:09:31.564 "name": null, 00:09:31.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.564 "is_configured": false, 00:09:31.564 "data_offset": 2048, 00:09:31.564 "data_size": 63488 00:09:31.564 }, 00:09:31.564 { 00:09:31.564 "name": "BaseBdev2", 00:09:31.564 "uuid": "f13ca668-3ec0-11ef-b9c4-5b09e08d4792", 00:09:31.564 "is_configured": true, 00:09:31.564 "data_offset": 2048, 00:09:31.564 "data_size": 63488 00:09:31.564 }, 00:09:31.564 { 00:09:31.564 "name": "BaseBdev3", 00:09:31.564 "uuid": "f1e48426-3ec0-11ef-b9c4-5b09e08d4792", 00:09:31.564 "is_configured": true, 00:09:31.564 "data_offset": 2048, 00:09:31.564 "data_size": 63488 00:09:31.564 }, 00:09:31.564 { 00:09:31.564 "name": "BaseBdev4", 00:09:31.564 "uuid": "f284744a-3ec0-11ef-b9c4-5b09e08d4792", 00:09:31.564 "is_configured": true, 00:09:31.564 "data_offset": 2048, 00:09:31.564 "data_size": 63488 00:09:31.564 } 00:09:31.564 ] 00:09:31.564 }' 00:09:31.564 13:33:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:31.564 13:33:10 -- 
common/autotest_common.sh@10 -- # set +x 00:09:31.823 13:33:10 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:09:31.823 13:33:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:31.823 13:33:10 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:31.823 13:33:10 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:32.081 13:33:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:32.081 13:33:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.081 13:33:11 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:32.081 [2024-07-10 13:33:11.343284] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:32.081 13:33:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:32.081 13:33:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:32.081 13:33:11 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:32.081 13:33:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:32.340 13:33:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:32.340 13:33:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.340 13:33:11 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:32.340 [2024-07-10 13:33:11.695916] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:32.599 13:33:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:32.599 13:33:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:32.599 13:33:11 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:32.599 13:33:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:32.599 13:33:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:32.599 13:33:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.599 13:33:11 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:09:32.858 [2024-07-10 13:33:12.064571] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:32.858 [2024-07-10 13:33:12.064591] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.858 [2024-07-10 13:33:12.064600] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.858 [2024-07-10 13:33:12.069240] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.858 [2024-07-10 13:33:12.069257] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cec4a00 name Existed_Raid, state offline 00:09:32.858 13:33:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:32.858 13:33:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:32.858 13:33:12 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:32.858 13:33:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:09:33.116 13:33:12 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:09:33.116 13:33:12 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:09:33.116 13:33:12 -- bdev/bdev_raid.sh@287 -- # killprocess 53156 
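Each of the state checks traced above reduces to one pattern: query bdev_raid_get_bdevs over the test RPC socket and filter the JSON with jq. A reduced sketch of that step, reusing the socket path and jq filters visible in the trace (the helper name check_raid_state is invented here for illustration and is not the suite's verify_raid_bdev_state):

    check_raid_state() {
        local name=$1 want_state=$2
        local info
        # fetch every raid bdev and keep only the one under test
        info=$(./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
               jq -r ".[] | select(.name == \"$name\")")
        # compare the reported state against the expectation
        [ "$(jq -r '.state' <<<"$info")" = "$want_state" ]
    }
    # after removing one base bdev of a raid1 array, it should stay online:
    check_raid_state Existed_Raid online

The actual helper additionally compares raid_level, strip_size_kb and the discovered/operational base bdev counts, as the JSON dumps above show; this is only an illustration of the pattern, not the suite's code.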
00:09:33.116 13:33:12 -- common/autotest_common.sh@926 -- # '[' -z 53156 ']' 00:09:33.116 13:33:12 -- common/autotest_common.sh@930 -- # kill -0 53156 00:09:33.116 13:33:12 -- common/autotest_common.sh@931 -- # uname 00:09:33.116 13:33:12 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:09:33.116 13:33:12 -- common/autotest_common.sh@934 -- # ps -c -o command 53156 00:09:33.116 13:33:12 -- common/autotest_common.sh@934 -- # tail -1 00:09:33.116 13:33:12 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:09:33.116 13:33:12 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:09:33.116 killing process with pid 53156 00:09:33.116 13:33:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53156' 00:09:33.116 13:33:12 -- common/autotest_common.sh@945 -- # kill 53156 00:09:33.116 [2024-07-10 13:33:12.294165] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:33.116 [2024-07-10 13:33:12.294197] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.116 13:33:12 -- common/autotest_common.sh@950 -- # wait 53156 00:09:33.116 13:33:12 -- bdev/bdev_raid.sh@289 -- # return 0 00:09:33.116 00:09:33.116 real 0m9.828s 00:09:33.116 user 0m17.362s 00:09:33.116 sys 0m1.593s 00:09:33.116 13:33:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.116 13:33:12 -- common/autotest_common.sh@10 -- # set +x 00:09:33.116 ************************************ 00:09:33.116 END TEST raid_state_function_test_sb 00:09:33.116 ************************************ 00:09:33.116 13:33:12 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:09:33.116 13:33:12 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:33.116 13:33:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:33.116 13:33:12 -- common/autotest_common.sh@10 -- # set +x 00:09:33.375 ************************************ 00:09:33.375 START TEST raid_superblock_test 00:09:33.375 ************************************ 00:09:33.375 13:33:12 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:09:33.375 13:33:12 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:09:33.375 13:33:12 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:09:33.375 13:33:12 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:09:33.375 13:33:12 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:09:33.375 13:33:12 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:09:33.375 13:33:12 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:09:33.375 13:33:12 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:09:33.375 13:33:12 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:09:33.375 13:33:12 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:09:33.375 13:33:12 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:09:33.375 13:33:12 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:09:33.375 13:33:12 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:09:33.375 13:33:12 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:09:33.375 13:33:12 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:09:33.375 13:33:12 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:09:33.375 13:33:12 -- bdev/bdev_raid.sh@357 -- # raid_pid=53429 00:09:33.375 13:33:12 -- bdev/bdev_raid.sh@358 -- # waitforlisten 53429 /var/tmp/spdk-raid.sock 00:09:33.375 13:33:12 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 
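The raid_superblock_test run that starts here follows the same harness pattern as the preceding tests: launch the bdev_svc app on a private RPC socket with bdev_raid debug logging, wait for the socket, drive it with rpc.py, and kill the process at the end. A minimal standalone sketch of that pattern, run from an SPDK checkout (the polling loop below stands in for the suite's waitforlisten/killprocess helpers and is illustrative only):

    # start the bdev service with raid debug logging on a private RPC socket
    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # wait until the socket answers RPCs before issuing any commands
    until ./scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    # ... bdev_malloc_create / bdev_passthru_create / bdev_raid_create calls go here ...
    kill "$raid_pid"
    wait "$raid_pid"

In the captured run the absolute paths under /usr/home/vagrant/spdk_repo/spdk are used instead of the relative ones shown here.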
00:09:33.375 13:33:12 -- common/autotest_common.sh@819 -- # '[' -z 53429 ']' 00:09:33.375 13:33:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:33.375 13:33:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:33.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:33.375 13:33:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:33.375 13:33:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:33.375 13:33:12 -- common/autotest_common.sh@10 -- # set +x 00:09:33.375 [2024-07-10 13:33:12.505396] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:33.375 [2024-07-10 13:33:12.505759] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:33.633 EAL: TSC is not safe to use in SMP mode 00:09:33.633 EAL: TSC is not invariant 00:09:33.633 [2024-07-10 13:33:12.944419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.889 [2024-07-10 13:33:13.029792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.889 [2024-07-10 13:33:13.030270] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.889 [2024-07-10 13:33:13.030284] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.146 13:33:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:34.146 13:33:13 -- common/autotest_common.sh@852 -- # return 0 00:09:34.146 13:33:13 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:09:34.146 13:33:13 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:34.146 13:33:13 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:09:34.146 13:33:13 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:09:34.146 13:33:13 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:34.146 13:33:13 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:34.146 13:33:13 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:34.146 13:33:13 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:34.146 13:33:13 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:34.403 malloc1 00:09:34.403 13:33:13 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:34.730 [2024-07-10 13:33:13.821425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:34.730 [2024-07-10 13:33:13.821502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.731 [2024-07-10 13:33:13.822010] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cacb780 00:09:34.731 [2024-07-10 13:33:13.822034] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.731 [2024-07-10 13:33:13.822718] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.731 [2024-07-10 13:33:13.822749] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:34.731 pt1 00:09:34.731 13:33:13 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:34.731 13:33:13 -- bdev/bdev_raid.sh@361 -- # (( i <= 
num_base_bdevs )) 00:09:34.731 13:33:13 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:09:34.731 13:33:13 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:09:34.731 13:33:13 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:34.731 13:33:13 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:34.731 13:33:13 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:34.731 13:33:13 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:34.731 13:33:13 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:34.731 malloc2 00:09:34.731 13:33:14 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:34.989 [2024-07-10 13:33:14.201453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:34.989 [2024-07-10 13:33:14.201502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.989 [2024-07-10 13:33:14.201540] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cacbc80 00:09:34.989 [2024-07-10 13:33:14.201546] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.989 [2024-07-10 13:33:14.201977] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.989 [2024-07-10 13:33:14.202005] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:34.989 pt2 00:09:34.989 13:33:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:34.989 13:33:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:34.989 13:33:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:09:34.989 13:33:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:09:34.989 13:33:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:34.989 13:33:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:34.989 13:33:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:34.989 13:33:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:34.989 13:33:14 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:09:35.247 malloc3 00:09:35.247 13:33:14 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:35.505 [2024-07-10 13:33:14.633496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:35.505 [2024-07-10 13:33:14.633559] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.505 [2024-07-10 13:33:14.633581] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cacc180 00:09:35.505 [2024-07-10 13:33:14.633586] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.505 [2024-07-10 13:33:14.634011] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.505 [2024-07-10 13:33:14.634035] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:35.505 pt3 00:09:35.505 13:33:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:35.505 13:33:14 -- bdev/bdev_raid.sh@361 -- # (( i <= 
num_base_bdevs )) 00:09:35.505 13:33:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:09:35.505 13:33:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:09:35.505 13:33:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:35.505 13:33:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:35.505 13:33:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:35.505 13:33:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:35.505 13:33:14 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:09:35.505 malloc4 00:09:35.505 13:33:14 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:35.764 [2024-07-10 13:33:14.989535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:35.764 [2024-07-10 13:33:14.989580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.764 [2024-07-10 13:33:14.989601] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cacc680 00:09:35.764 [2024-07-10 13:33:14.989606] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.764 [2024-07-10 13:33:14.990034] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.764 [2024-07-10 13:33:14.990061] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:35.764 pt4 00:09:35.764 13:33:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:35.764 13:33:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:35.764 13:33:15 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:09:36.022 [2024-07-10 13:33:15.177560] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:36.022 [2024-07-10 13:33:15.177944] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:36.022 [2024-07-10 13:33:15.177961] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:36.022 [2024-07-10 13:33:15.177969] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:36.022 [2024-07-10 13:33:15.178018] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cacc900 00:09:36.022 [2024-07-10 13:33:15.178028] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:36.022 [2024-07-10 13:33:15.178054] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cb2ee20 00:09:36.022 [2024-07-10 13:33:15.178106] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cacc900 00:09:36.022 [2024-07-10 13:33:15.178113] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cacc900 00:09:36.022 [2024-07-10 13:33:15.178129] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.022 13:33:15 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:09:36.022 13:33:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:36.022 13:33:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:36.022 13:33:15 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid1 00:09:36.022 13:33:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:36.022 13:33:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:36.022 13:33:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:36.022 13:33:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:36.022 13:33:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:36.022 13:33:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:36.022 13:33:15 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:36.022 13:33:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.022 13:33:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:36.022 "name": "raid_bdev1", 00:09:36.022 "uuid": "f5e3f74b-3ec0-11ef-b9c4-5b09e08d4792", 00:09:36.022 "strip_size_kb": 0, 00:09:36.022 "state": "online", 00:09:36.022 "raid_level": "raid1", 00:09:36.022 "superblock": true, 00:09:36.022 "num_base_bdevs": 4, 00:09:36.022 "num_base_bdevs_discovered": 4, 00:09:36.022 "num_base_bdevs_operational": 4, 00:09:36.022 "base_bdevs_list": [ 00:09:36.022 { 00:09:36.022 "name": "pt1", 00:09:36.022 "uuid": "157a7dd7-2daf-d659-af99-6913a5532414", 00:09:36.022 "is_configured": true, 00:09:36.022 "data_offset": 2048, 00:09:36.022 "data_size": 63488 00:09:36.022 }, 00:09:36.022 { 00:09:36.022 "name": "pt2", 00:09:36.022 "uuid": "dea553da-9d86-3254-b9b6-28a83d584200", 00:09:36.022 "is_configured": true, 00:09:36.022 "data_offset": 2048, 00:09:36.022 "data_size": 63488 00:09:36.022 }, 00:09:36.022 { 00:09:36.022 "name": "pt3", 00:09:36.022 "uuid": "75d11148-472d-bc53-b7aa-e407e9e1ec81", 00:09:36.022 "is_configured": true, 00:09:36.022 "data_offset": 2048, 00:09:36.022 "data_size": 63488 00:09:36.022 }, 00:09:36.022 { 00:09:36.022 "name": "pt4", 00:09:36.022 "uuid": "4e2db0ce-d5bc-7857-b81c-202c048f8b30", 00:09:36.022 "is_configured": true, 00:09:36.022 "data_offset": 2048, 00:09:36.022 "data_size": 63488 00:09:36.022 } 00:09:36.022 ] 00:09:36.022 }' 00:09:36.022 13:33:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:36.022 13:33:15 -- common/autotest_common.sh@10 -- # set +x 00:09:36.280 13:33:15 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:36.280 13:33:15 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:09:36.538 [2024-07-10 13:33:15.817641] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.538 13:33:15 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f5e3f74b-3ec0-11ef-b9c4-5b09e08d4792 00:09:36.538 13:33:15 -- bdev/bdev_raid.sh@380 -- # '[' -z f5e3f74b-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:09:36.538 13:33:15 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:36.797 [2024-07-10 13:33:16.009640] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:36.797 [2024-07-10 13:33:16.009659] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.797 [2024-07-10 13:33:16.009670] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.797 [2024-07-10 13:33:16.009698] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.797 [2024-07-10 13:33:16.009701] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x82cacc900 name raid_bdev1, state offline 00:09:36.797 13:33:16 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:36.797 13:33:16 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:09:37.056 13:33:16 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:09:37.056 13:33:16 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:09:37.056 13:33:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:37.056 13:33:16 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:37.056 13:33:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:37.056 13:33:16 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:37.315 13:33:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:37.315 13:33:16 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:37.574 13:33:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:37.574 13:33:16 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:09:37.574 13:33:16 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:37.574 13:33:16 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:37.833 13:33:17 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:09:37.833 13:33:17 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:37.833 13:33:17 -- common/autotest_common.sh@640 -- # local es=0 00:09:37.833 13:33:17 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:37.833 13:33:17 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.833 13:33:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:37.833 13:33:17 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.833 13:33:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:37.833 13:33:17 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.833 13:33:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:37.833 13:33:17 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.833 13:33:17 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:37.833 13:33:17 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:38.092 [2024-07-10 13:33:17.269782] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:38.092 [2024-07-10 13:33:17.270201] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:38.092 [2024-07-10 13:33:17.270217] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:38.092 [2024-07-10 13:33:17.270223] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:38.092 [2024-07-10 13:33:17.270233] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:09:38.092 [2024-07-10 13:33:17.270265] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:09:38.092 [2024-07-10 13:33:17.270273] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:09:38.092 [2024-07-10 13:33:17.270280] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:09:38.092 [2024-07-10 13:33:17.270286] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:38.092 [2024-07-10 13:33:17.270289] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cacc680 name raid_bdev1, state configuring 00:09:38.092 request: 00:09:38.092 { 00:09:38.092 "name": "raid_bdev1", 00:09:38.092 "raid_level": "raid1", 00:09:38.092 "base_bdevs": [ 00:09:38.092 "malloc1", 00:09:38.092 "malloc2", 00:09:38.092 "malloc3", 00:09:38.092 "malloc4" 00:09:38.092 ], 00:09:38.092 "superblock": false, 00:09:38.092 "method": "bdev_raid_create", 00:09:38.092 "req_id": 1 00:09:38.092 } 00:09:38.092 Got JSON-RPC error response 00:09:38.092 response: 00:09:38.092 { 00:09:38.092 "code": -17, 00:09:38.092 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:38.092 } 00:09:38.092 13:33:17 -- common/autotest_common.sh@643 -- # es=1 00:09:38.092 13:33:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:38.092 13:33:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:38.092 13:33:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:38.092 13:33:17 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:38.092 13:33:17 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:09:38.092 13:33:17 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:09:38.092 13:33:17 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:09:38.092 13:33:17 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:38.352 [2024-07-10 13:33:17.609824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:38.352 [2024-07-10 13:33:17.609866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.352 [2024-07-10 13:33:17.609888] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cacc180 00:09:38.352 [2024-07-10 13:33:17.609894] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.352 [2024-07-10 13:33:17.610370] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.352 [2024-07-10 13:33:17.610396] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:38.352 [2024-07-10 13:33:17.610412] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:09:38.352 [2024-07-10 13:33:17.610420] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:38.352 pt1 00:09:38.352 13:33:17 -- bdev/bdev_raid.sh@412 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:09:38.352 13:33:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:38.352 13:33:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:38.352 13:33:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:38.352 13:33:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:38.352 13:33:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:38.352 13:33:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:38.352 13:33:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:38.352 13:33:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:38.352 13:33:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:38.352 13:33:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.352 13:33:17 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:38.611 13:33:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:38.611 "name": "raid_bdev1", 00:09:38.611 "uuid": "f5e3f74b-3ec0-11ef-b9c4-5b09e08d4792", 00:09:38.611 "strip_size_kb": 0, 00:09:38.611 "state": "configuring", 00:09:38.611 "raid_level": "raid1", 00:09:38.611 "superblock": true, 00:09:38.611 "num_base_bdevs": 4, 00:09:38.611 "num_base_bdevs_discovered": 1, 00:09:38.611 "num_base_bdevs_operational": 4, 00:09:38.611 "base_bdevs_list": [ 00:09:38.611 { 00:09:38.611 "name": "pt1", 00:09:38.611 "uuid": "157a7dd7-2daf-d659-af99-6913a5532414", 00:09:38.611 "is_configured": true, 00:09:38.611 "data_offset": 2048, 00:09:38.611 "data_size": 63488 00:09:38.611 }, 00:09:38.611 { 00:09:38.611 "name": null, 00:09:38.611 "uuid": "dea553da-9d86-3254-b9b6-28a83d584200", 00:09:38.611 "is_configured": false, 00:09:38.611 "data_offset": 2048, 00:09:38.611 "data_size": 63488 00:09:38.611 }, 00:09:38.611 { 00:09:38.611 "name": null, 00:09:38.611 "uuid": "75d11148-472d-bc53-b7aa-e407e9e1ec81", 00:09:38.611 "is_configured": false, 00:09:38.611 "data_offset": 2048, 00:09:38.611 "data_size": 63488 00:09:38.611 }, 00:09:38.611 { 00:09:38.611 "name": null, 00:09:38.611 "uuid": "4e2db0ce-d5bc-7857-b81c-202c048f8b30", 00:09:38.611 "is_configured": false, 00:09:38.611 "data_offset": 2048, 00:09:38.611 "data_size": 63488 00:09:38.611 } 00:09:38.611 ] 00:09:38.611 }' 00:09:38.611 13:33:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:38.611 13:33:17 -- common/autotest_common.sh@10 -- # set +x 00:09:38.870 13:33:18 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:09:38.870 13:33:18 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:39.129 [2024-07-10 13:33:18.249896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:39.129 [2024-07-10 13:33:18.249971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.129 [2024-07-10 13:33:18.249993] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cacb780 00:09:39.129 [2024-07-10 13:33:18.249999] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.129 [2024-07-10 13:33:18.250069] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.129 [2024-07-10 13:33:18.250076] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:39.129 [2024-07-10 13:33:18.250088] 
bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:39.129 [2024-07-10 13:33:18.250093] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:39.129 pt2 00:09:39.129 13:33:18 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:39.129 [2024-07-10 13:33:18.437914] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:39.129 13:33:18 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:09:39.129 13:33:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:39.129 13:33:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:39.129 13:33:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:39.129 13:33:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:39.129 13:33:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:39.129 13:33:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:39.129 13:33:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:39.129 13:33:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:39.129 13:33:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:39.129 13:33:18 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:39.129 13:33:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.388 13:33:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:39.388 "name": "raid_bdev1", 00:09:39.388 "uuid": "f5e3f74b-3ec0-11ef-b9c4-5b09e08d4792", 00:09:39.388 "strip_size_kb": 0, 00:09:39.388 "state": "configuring", 00:09:39.388 "raid_level": "raid1", 00:09:39.388 "superblock": true, 00:09:39.388 "num_base_bdevs": 4, 00:09:39.388 "num_base_bdevs_discovered": 1, 00:09:39.388 "num_base_bdevs_operational": 4, 00:09:39.388 "base_bdevs_list": [ 00:09:39.388 { 00:09:39.388 "name": "pt1", 00:09:39.388 "uuid": "157a7dd7-2daf-d659-af99-6913a5532414", 00:09:39.388 "is_configured": true, 00:09:39.388 "data_offset": 2048, 00:09:39.388 "data_size": 63488 00:09:39.388 }, 00:09:39.388 { 00:09:39.388 "name": null, 00:09:39.388 "uuid": "dea553da-9d86-3254-b9b6-28a83d584200", 00:09:39.388 "is_configured": false, 00:09:39.388 "data_offset": 2048, 00:09:39.388 "data_size": 63488 00:09:39.388 }, 00:09:39.388 { 00:09:39.388 "name": null, 00:09:39.388 "uuid": "75d11148-472d-bc53-b7aa-e407e9e1ec81", 00:09:39.388 "is_configured": false, 00:09:39.388 "data_offset": 2048, 00:09:39.388 "data_size": 63488 00:09:39.388 }, 00:09:39.388 { 00:09:39.388 "name": null, 00:09:39.388 "uuid": "4e2db0ce-d5bc-7857-b81c-202c048f8b30", 00:09:39.388 "is_configured": false, 00:09:39.388 "data_offset": 2048, 00:09:39.388 "data_size": 63488 00:09:39.388 } 00:09:39.388 ] 00:09:39.388 }' 00:09:39.388 13:33:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:39.388 13:33:18 -- common/autotest_common.sh@10 -- # set +x 00:09:39.646 13:33:18 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:09:39.646 13:33:18 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:39.646 13:33:18 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:39.905 [2024-07-10 13:33:19.105988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:39.905 [2024-07-10 13:33:19.106028] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.905 [2024-07-10 13:33:19.106046] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cacb780 00:09:39.905 [2024-07-10 13:33:19.106052] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.905 [2024-07-10 13:33:19.106111] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.905 [2024-07-10 13:33:19.106117] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:39.905 [2024-07-10 13:33:19.106130] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:39.905 [2024-07-10 13:33:19.106135] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:39.905 pt2 00:09:39.905 13:33:19 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:39.905 13:33:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:39.905 13:33:19 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:40.164 [2024-07-10 13:33:19.294010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:40.164 [2024-07-10 13:33:19.294048] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.164 [2024-07-10 13:33:19.294064] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82caccb80 00:09:40.164 [2024-07-10 13:33:19.294070] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.164 [2024-07-10 13:33:19.294122] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.164 [2024-07-10 13:33:19.294128] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:40.164 [2024-07-10 13:33:19.294139] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:09:40.164 [2024-07-10 13:33:19.294144] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:40.164 pt3 00:09:40.164 13:33:19 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:40.164 13:33:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:40.164 13:33:19 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:40.164 [2024-07-10 13:33:19.486075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:40.164 [2024-07-10 13:33:19.486146] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.164 [2024-07-10 13:33:19.486182] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cacc900 00:09:40.165 [2024-07-10 13:33:19.486189] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.165 [2024-07-10 13:33:19.486326] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.165 [2024-07-10 13:33:19.486335] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:40.165 [2024-07-10 13:33:19.486360] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:09:40.165 [2024-07-10 13:33:19.486371] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:40.165 [2024-07-10 13:33:19.486405] 
bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cacbc80 00:09:40.165 [2024-07-10 13:33:19.486408] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:40.165 [2024-07-10 13:33:19.486439] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cb2ee20 00:09:40.165 [2024-07-10 13:33:19.486489] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cacbc80 00:09:40.165 [2024-07-10 13:33:19.486493] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cacbc80 00:09:40.165 [2024-07-10 13:33:19.486510] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.165 pt4 00:09:40.165 13:33:19 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:40.165 13:33:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:40.165 13:33:19 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:09:40.165 13:33:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:40.165 13:33:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:40.165 13:33:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:40.165 13:33:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:40.165 13:33:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:40.165 13:33:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:40.165 13:33:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:40.165 13:33:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:40.165 13:33:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:40.165 13:33:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.165 13:33:19 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.424 13:33:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:40.424 "name": "raid_bdev1", 00:09:40.424 "uuid": "f5e3f74b-3ec0-11ef-b9c4-5b09e08d4792", 00:09:40.424 "strip_size_kb": 0, 00:09:40.424 "state": "online", 00:09:40.424 "raid_level": "raid1", 00:09:40.424 "superblock": true, 00:09:40.424 "num_base_bdevs": 4, 00:09:40.424 "num_base_bdevs_discovered": 4, 00:09:40.424 "num_base_bdevs_operational": 4, 00:09:40.424 "base_bdevs_list": [ 00:09:40.424 { 00:09:40.424 "name": "pt1", 00:09:40.424 "uuid": "157a7dd7-2daf-d659-af99-6913a5532414", 00:09:40.424 "is_configured": true, 00:09:40.424 "data_offset": 2048, 00:09:40.424 "data_size": 63488 00:09:40.424 }, 00:09:40.424 { 00:09:40.424 "name": "pt2", 00:09:40.424 "uuid": "dea553da-9d86-3254-b9b6-28a83d584200", 00:09:40.424 "is_configured": true, 00:09:40.424 "data_offset": 2048, 00:09:40.424 "data_size": 63488 00:09:40.424 }, 00:09:40.424 { 00:09:40.424 "name": "pt3", 00:09:40.424 "uuid": "75d11148-472d-bc53-b7aa-e407e9e1ec81", 00:09:40.424 "is_configured": true, 00:09:40.424 "data_offset": 2048, 00:09:40.424 "data_size": 63488 00:09:40.424 }, 00:09:40.424 { 00:09:40.424 "name": "pt4", 00:09:40.424 "uuid": "4e2db0ce-d5bc-7857-b81c-202c048f8b30", 00:09:40.424 "is_configured": true, 00:09:40.424 "data_offset": 2048, 00:09:40.424 "data_size": 63488 00:09:40.424 } 00:09:40.424 ] 00:09:40.424 }' 00:09:40.424 13:33:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:40.424 13:33:19 -- common/autotest_common.sh@10 -- # set +x 00:09:40.685 13:33:19 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b raid_bdev1 00:09:40.685 13:33:19 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:09:40.945 [2024-07-10 13:33:20.150162] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.945 13:33:20 -- bdev/bdev_raid.sh@430 -- # '[' f5e3f74b-3ec0-11ef-b9c4-5b09e08d4792 '!=' f5e3f74b-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:09:40.945 13:33:20 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:09:40.945 13:33:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:40.945 13:33:20 -- bdev/bdev_raid.sh@196 -- # return 0 00:09:40.945 13:33:20 -- bdev/bdev_raid.sh@436 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:41.204 [2024-07-10 13:33:20.342128] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:41.204 13:33:20 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:41.204 13:33:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:41.204 13:33:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:41.204 13:33:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:41.204 13:33:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:41.204 13:33:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:41.204 13:33:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:41.204 13:33:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:41.204 13:33:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:41.204 13:33:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:41.204 13:33:20 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:41.204 13:33:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.204 13:33:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:41.204 "name": "raid_bdev1", 00:09:41.204 "uuid": "f5e3f74b-3ec0-11ef-b9c4-5b09e08d4792", 00:09:41.204 "strip_size_kb": 0, 00:09:41.204 "state": "online", 00:09:41.204 "raid_level": "raid1", 00:09:41.204 "superblock": true, 00:09:41.204 "num_base_bdevs": 4, 00:09:41.204 "num_base_bdevs_discovered": 3, 00:09:41.204 "num_base_bdevs_operational": 3, 00:09:41.204 "base_bdevs_list": [ 00:09:41.204 { 00:09:41.204 "name": null, 00:09:41.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.204 "is_configured": false, 00:09:41.204 "data_offset": 2048, 00:09:41.204 "data_size": 63488 00:09:41.204 }, 00:09:41.204 { 00:09:41.204 "name": "pt2", 00:09:41.204 "uuid": "dea553da-9d86-3254-b9b6-28a83d584200", 00:09:41.204 "is_configured": true, 00:09:41.204 "data_offset": 2048, 00:09:41.204 "data_size": 63488 00:09:41.204 }, 00:09:41.204 { 00:09:41.204 "name": "pt3", 00:09:41.204 "uuid": "75d11148-472d-bc53-b7aa-e407e9e1ec81", 00:09:41.204 "is_configured": true, 00:09:41.204 "data_offset": 2048, 00:09:41.204 "data_size": 63488 00:09:41.204 }, 00:09:41.204 { 00:09:41.204 "name": "pt4", 00:09:41.204 "uuid": "4e2db0ce-d5bc-7857-b81c-202c048f8b30", 00:09:41.204 "is_configured": true, 00:09:41.204 "data_offset": 2048, 00:09:41.204 "data_size": 63488 00:09:41.204 } 00:09:41.204 ] 00:09:41.204 }' 00:09:41.204 13:33:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:41.204 13:33:20 -- common/autotest_common.sh@10 -- # set +x 00:09:41.462 13:33:20 -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:41.723 [2024-07-10 
13:33:20.974214] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:41.723 [2024-07-10 13:33:20.974247] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.723 [2024-07-10 13:33:20.974271] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.723 [2024-07-10 13:33:20.974290] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.723 [2024-07-10 13:33:20.974294] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cacbc80 name raid_bdev1, state offline 00:09:41.723 13:33:20 -- bdev/bdev_raid.sh@443 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:41.723 13:33:20 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:09:41.983 13:33:21 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:09:41.983 13:33:21 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:09:41.983 13:33:21 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:09:41.983 13:33:21 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:09:41.983 13:33:21 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:42.244 13:33:21 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:09:42.244 13:33:21 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:09:42.244 13:33:21 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:42.244 13:33:21 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:09:42.244 13:33:21 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:09:42.244 13:33:21 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:09:42.504 13:33:21 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:09:42.504 13:33:21 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:09:42.504 13:33:21 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:09:42.504 13:33:21 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:09:42.504 13:33:21 -- bdev/bdev_raid.sh@455 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:42.764 [2024-07-10 13:33:21.878288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:42.764 [2024-07-10 13:33:21.878344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.764 [2024-07-10 13:33:21.878374] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cacc900 00:09:42.764 [2024-07-10 13:33:21.878380] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.764 [2024-07-10 13:33:21.878884] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.764 [2024-07-10 13:33:21.878911] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:42.764 [2024-07-10 13:33:21.878932] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:42.764 [2024-07-10 13:33:21.878954] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:42.764 pt2 00:09:42.764 13:33:21 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:42.764 13:33:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:42.764 
13:33:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:42.764 13:33:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:42.764 13:33:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:42.764 13:33:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:42.764 13:33:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:42.764 13:33:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:42.764 13:33:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:42.764 13:33:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:42.764 13:33:21 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:42.764 13:33:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.764 13:33:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:42.764 "name": "raid_bdev1", 00:09:42.764 "uuid": "f5e3f74b-3ec0-11ef-b9c4-5b09e08d4792", 00:09:42.764 "strip_size_kb": 0, 00:09:42.764 "state": "configuring", 00:09:42.764 "raid_level": "raid1", 00:09:42.764 "superblock": true, 00:09:42.764 "num_base_bdevs": 4, 00:09:42.764 "num_base_bdevs_discovered": 1, 00:09:42.764 "num_base_bdevs_operational": 3, 00:09:42.764 "base_bdevs_list": [ 00:09:42.764 { 00:09:42.764 "name": null, 00:09:42.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.764 "is_configured": false, 00:09:42.764 "data_offset": 2048, 00:09:42.764 "data_size": 63488 00:09:42.764 }, 00:09:42.764 { 00:09:42.764 "name": "pt2", 00:09:42.764 "uuid": "dea553da-9d86-3254-b9b6-28a83d584200", 00:09:42.764 "is_configured": true, 00:09:42.764 "data_offset": 2048, 00:09:42.764 "data_size": 63488 00:09:42.764 }, 00:09:42.764 { 00:09:42.764 "name": null, 00:09:42.764 "uuid": "75d11148-472d-bc53-b7aa-e407e9e1ec81", 00:09:42.764 "is_configured": false, 00:09:42.764 "data_offset": 2048, 00:09:42.764 "data_size": 63488 00:09:42.764 }, 00:09:42.764 { 00:09:42.764 "name": null, 00:09:42.764 "uuid": "4e2db0ce-d5bc-7857-b81c-202c048f8b30", 00:09:42.764 "is_configured": false, 00:09:42.764 "data_offset": 2048, 00:09:42.764 "data_size": 63488 00:09:42.764 } 00:09:42.764 ] 00:09:42.764 }' 00:09:42.764 13:33:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:42.764 13:33:22 -- common/autotest_common.sh@10 -- # set +x 00:09:43.024 13:33:22 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:09:43.024 13:33:22 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:09:43.024 13:33:22 -- bdev/bdev_raid.sh@455 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:43.284 [2024-07-10 13:33:22.550372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:43.284 [2024-07-10 13:33:22.550427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.284 [2024-07-10 13:33:22.550453] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cacc680 00:09:43.284 [2024-07-10 13:33:22.550458] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.284 [2024-07-10 13:33:22.550567] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.284 [2024-07-10 13:33:22.550574] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:43.284 [2024-07-10 13:33:22.550592] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:09:43.284 [2024-07-10 13:33:22.550599] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:43.284 pt3 00:09:43.284 13:33:22 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:43.284 13:33:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:43.284 13:33:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:43.285 13:33:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:43.285 13:33:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:43.285 13:33:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:43.285 13:33:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:43.285 13:33:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:43.285 13:33:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:43.285 13:33:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:43.285 13:33:22 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:43.285 13:33:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.545 13:33:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:43.545 "name": "raid_bdev1", 00:09:43.545 "uuid": "f5e3f74b-3ec0-11ef-b9c4-5b09e08d4792", 00:09:43.545 "strip_size_kb": 0, 00:09:43.545 "state": "configuring", 00:09:43.545 "raid_level": "raid1", 00:09:43.545 "superblock": true, 00:09:43.545 "num_base_bdevs": 4, 00:09:43.545 "num_base_bdevs_discovered": 2, 00:09:43.545 "num_base_bdevs_operational": 3, 00:09:43.545 "base_bdevs_list": [ 00:09:43.545 { 00:09:43.545 "name": null, 00:09:43.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.545 "is_configured": false, 00:09:43.545 "data_offset": 2048, 00:09:43.545 "data_size": 63488 00:09:43.545 }, 00:09:43.545 { 00:09:43.545 "name": "pt2", 00:09:43.545 "uuid": "dea553da-9d86-3254-b9b6-28a83d584200", 00:09:43.545 "is_configured": true, 00:09:43.545 "data_offset": 2048, 00:09:43.545 "data_size": 63488 00:09:43.545 }, 00:09:43.545 { 00:09:43.545 "name": "pt3", 00:09:43.545 "uuid": "75d11148-472d-bc53-b7aa-e407e9e1ec81", 00:09:43.545 "is_configured": true, 00:09:43.545 "data_offset": 2048, 00:09:43.545 "data_size": 63488 00:09:43.545 }, 00:09:43.545 { 00:09:43.545 "name": null, 00:09:43.545 "uuid": "4e2db0ce-d5bc-7857-b81c-202c048f8b30", 00:09:43.545 "is_configured": false, 00:09:43.545 "data_offset": 2048, 00:09:43.545 "data_size": 63488 00:09:43.545 } 00:09:43.545 ] 00:09:43.545 }' 00:09:43.545 13:33:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:43.545 13:33:22 -- common/autotest_common.sh@10 -- # set +x 00:09:43.804 13:33:23 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:09:43.804 13:33:23 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:09:43.804 13:33:23 -- bdev/bdev_raid.sh@462 -- # i=3 00:09:43.804 13:33:23 -- bdev/bdev_raid.sh@463 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:44.063 [2024-07-10 13:33:23.194439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:44.063 [2024-07-10 13:33:23.194518] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.063 [2024-07-10 13:33:23.194543] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cacbc80 00:09:44.063 [2024-07-10 13:33:23.194549] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.063 [2024-07-10 13:33:23.194641] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.063 [2024-07-10 13:33:23.194669] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:44.063 [2024-07-10 13:33:23.194686] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:09:44.063 [2024-07-10 13:33:23.194692] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:44.063 [2024-07-10 13:33:23.194716] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cacb780 00:09:44.063 [2024-07-10 13:33:23.194722] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:44.063 [2024-07-10 13:33:23.194737] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cb2ee20 00:09:44.063 [2024-07-10 13:33:23.194772] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cacb780 00:09:44.063 [2024-07-10 13:33:23.194779] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cacb780 00:09:44.063 [2024-07-10 13:33:23.194794] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.063 pt4 00:09:44.063 13:33:23 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:44.063 13:33:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:44.063 13:33:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:44.063 13:33:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:44.063 13:33:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:44.063 13:33:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:44.063 13:33:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:44.063 13:33:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:44.063 13:33:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:44.063 13:33:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:44.063 13:33:23 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.063 13:33:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.064 13:33:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:44.064 "name": "raid_bdev1", 00:09:44.064 "uuid": "f5e3f74b-3ec0-11ef-b9c4-5b09e08d4792", 00:09:44.064 "strip_size_kb": 0, 00:09:44.064 "state": "online", 00:09:44.064 "raid_level": "raid1", 00:09:44.064 "superblock": true, 00:09:44.064 "num_base_bdevs": 4, 00:09:44.064 "num_base_bdevs_discovered": 3, 00:09:44.064 "num_base_bdevs_operational": 3, 00:09:44.064 "base_bdevs_list": [ 00:09:44.064 { 00:09:44.064 "name": null, 00:09:44.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.064 "is_configured": false, 00:09:44.064 "data_offset": 2048, 00:09:44.064 "data_size": 63488 00:09:44.064 }, 00:09:44.064 { 00:09:44.064 "name": "pt2", 00:09:44.064 "uuid": "dea553da-9d86-3254-b9b6-28a83d584200", 00:09:44.064 "is_configured": true, 00:09:44.064 "data_offset": 2048, 00:09:44.064 "data_size": 63488 00:09:44.064 }, 00:09:44.064 { 00:09:44.064 "name": "pt3", 00:09:44.064 "uuid": "75d11148-472d-bc53-b7aa-e407e9e1ec81", 00:09:44.064 "is_configured": true, 00:09:44.064 "data_offset": 2048, 00:09:44.064 "data_size": 63488 00:09:44.064 }, 00:09:44.064 { 00:09:44.064 "name": "pt4", 00:09:44.064 "uuid": 
"4e2db0ce-d5bc-7857-b81c-202c048f8b30", 00:09:44.064 "is_configured": true, 00:09:44.064 "data_offset": 2048, 00:09:44.064 "data_size": 63488 00:09:44.064 } 00:09:44.064 ] 00:09:44.064 }' 00:09:44.064 13:33:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:44.064 13:33:23 -- common/autotest_common.sh@10 -- # set +x 00:09:44.323 13:33:23 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:09:44.323 13:33:23 -- bdev/bdev_raid.sh@470 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:44.583 [2024-07-10 13:33:23.842513] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:44.583 [2024-07-10 13:33:23.842539] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.583 [2024-07-10 13:33:23.842575] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.583 [2024-07-10 13:33:23.842591] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.583 [2024-07-10 13:33:23.842595] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cacb780 name raid_bdev1, state offline 00:09:44.583 13:33:23 -- bdev/bdev_raid.sh@471 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.583 13:33:23 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:09:44.843 13:33:24 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:09:44.843 13:33:24 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:09:44.843 13:33:24 -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:45.102 [2024-07-10 13:33:24.222581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:45.102 [2024-07-10 13:33:24.222630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.102 [2024-07-10 13:33:24.222670] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82caccb80 00:09:45.102 [2024-07-10 13:33:24.222676] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.102 [2024-07-10 13:33:24.223174] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.102 [2024-07-10 13:33:24.223206] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:45.102 [2024-07-10 13:33:24.223225] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:09:45.102 [2024-07-10 13:33:24.223234] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:45.102 pt1 00:09:45.102 13:33:24 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:09:45.102 13:33:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:45.102 13:33:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:45.102 13:33:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:45.102 13:33:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:45.102 13:33:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:45.102 13:33:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:45.102 13:33:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:45.102 13:33:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:45.102 13:33:24 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:09:45.102 13:33:24 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.102 13:33:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.102 13:33:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:45.102 "name": "raid_bdev1", 00:09:45.102 "uuid": "f5e3f74b-3ec0-11ef-b9c4-5b09e08d4792", 00:09:45.102 "strip_size_kb": 0, 00:09:45.102 "state": "configuring", 00:09:45.102 "raid_level": "raid1", 00:09:45.102 "superblock": true, 00:09:45.102 "num_base_bdevs": 4, 00:09:45.102 "num_base_bdevs_discovered": 1, 00:09:45.102 "num_base_bdevs_operational": 4, 00:09:45.102 "base_bdevs_list": [ 00:09:45.102 { 00:09:45.102 "name": "pt1", 00:09:45.102 "uuid": "157a7dd7-2daf-d659-af99-6913a5532414", 00:09:45.102 "is_configured": true, 00:09:45.102 "data_offset": 2048, 00:09:45.102 "data_size": 63488 00:09:45.102 }, 00:09:45.102 { 00:09:45.102 "name": null, 00:09:45.102 "uuid": "dea553da-9d86-3254-b9b6-28a83d584200", 00:09:45.102 "is_configured": false, 00:09:45.102 "data_offset": 2048, 00:09:45.102 "data_size": 63488 00:09:45.102 }, 00:09:45.102 { 00:09:45.102 "name": null, 00:09:45.102 "uuid": "75d11148-472d-bc53-b7aa-e407e9e1ec81", 00:09:45.102 "is_configured": false, 00:09:45.102 "data_offset": 2048, 00:09:45.102 "data_size": 63488 00:09:45.102 }, 00:09:45.102 { 00:09:45.102 "name": null, 00:09:45.102 "uuid": "4e2db0ce-d5bc-7857-b81c-202c048f8b30", 00:09:45.102 "is_configured": false, 00:09:45.102 "data_offset": 2048, 00:09:45.102 "data_size": 63488 00:09:45.102 } 00:09:45.102 ] 00:09:45.102 }' 00:09:45.102 13:33:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:45.102 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:09:45.362 13:33:24 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:09:45.362 13:33:24 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:09:45.362 13:33:24 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:45.622 13:33:24 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:09:45.622 13:33:24 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:09:45.622 13:33:24 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:45.881 13:33:25 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:09:45.881 13:33:25 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:09:45.881 13:33:25 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:09:46.141 13:33:25 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:09:46.141 13:33:25 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:09:46.141 13:33:25 -- bdev/bdev_raid.sh@489 -- # i=3 00:09:46.141 13:33:25 -- bdev/bdev_raid.sh@490 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:46.141 [2024-07-10 13:33:25.434732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:46.141 [2024-07-10 13:33:25.434815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.141 [2024-07-10 13:33:25.434841] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cacbc80 00:09:46.141 [2024-07-10 13:33:25.434847] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.141 [2024-07-10 
13:33:25.434938] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.141 [2024-07-10 13:33:25.434944] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:46.141 [2024-07-10 13:33:25.434968] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:09:46.141 [2024-07-10 13:33:25.434973] bdev_raid.c:3239:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:46.141 [2024-07-10 13:33:25.434975] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:46.141 [2024-07-10 13:33:25.434981] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cacc180 name raid_bdev1, state configuring 00:09:46.141 [2024-07-10 13:33:25.434991] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:46.141 pt4 00:09:46.141 13:33:25 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:46.141 13:33:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:46.141 13:33:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:46.141 13:33:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:46.141 13:33:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:46.141 13:33:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:46.141 13:33:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:46.141 13:33:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:46.141 13:33:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:46.141 13:33:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:46.141 13:33:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:46.141 13:33:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.401 13:33:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:46.401 "name": "raid_bdev1", 00:09:46.401 "uuid": "f5e3f74b-3ec0-11ef-b9c4-5b09e08d4792", 00:09:46.401 "strip_size_kb": 0, 00:09:46.401 "state": "configuring", 00:09:46.401 "raid_level": "raid1", 00:09:46.401 "superblock": true, 00:09:46.401 "num_base_bdevs": 4, 00:09:46.401 "num_base_bdevs_discovered": 1, 00:09:46.401 "num_base_bdevs_operational": 3, 00:09:46.401 "base_bdevs_list": [ 00:09:46.401 { 00:09:46.401 "name": null, 00:09:46.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.401 "is_configured": false, 00:09:46.401 "data_offset": 2048, 00:09:46.401 "data_size": 63488 00:09:46.401 }, 00:09:46.401 { 00:09:46.401 "name": null, 00:09:46.401 "uuid": "dea553da-9d86-3254-b9b6-28a83d584200", 00:09:46.401 "is_configured": false, 00:09:46.401 "data_offset": 2048, 00:09:46.401 "data_size": 63488 00:09:46.401 }, 00:09:46.401 { 00:09:46.401 "name": null, 00:09:46.401 "uuid": "75d11148-472d-bc53-b7aa-e407e9e1ec81", 00:09:46.401 "is_configured": false, 00:09:46.401 "data_offset": 2048, 00:09:46.401 "data_size": 63488 00:09:46.401 }, 00:09:46.401 { 00:09:46.401 "name": "pt4", 00:09:46.401 "uuid": "4e2db0ce-d5bc-7857-b81c-202c048f8b30", 00:09:46.401 "is_configured": true, 00:09:46.401 "data_offset": 2048, 00:09:46.401 "data_size": 63488 00:09:46.401 } 00:09:46.401 ] 00:09:46.401 }' 00:09:46.401 13:33:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:46.401 13:33:25 -- common/autotest_common.sh@10 -- # set +x 00:09:46.660 13:33:25 -- bdev/bdev_raid.sh@497 -- # (( i 
= 1 )) 00:09:46.660 13:33:25 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:09:46.660 13:33:25 -- bdev/bdev_raid.sh@498 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:46.920 [2024-07-10 13:33:26.094770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:46.920 [2024-07-10 13:33:26.094812] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.920 [2024-07-10 13:33:26.094844] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cacc680 00:09:46.920 [2024-07-10 13:33:26.094850] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.920 [2024-07-10 13:33:26.094909] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.920 [2024-07-10 13:33:26.094918] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:46.920 [2024-07-10 13:33:26.094930] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:46.920 [2024-07-10 13:33:26.094935] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:46.920 pt2 00:09:46.920 13:33:26 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:09:46.920 13:33:26 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:09:46.920 13:33:26 -- bdev/bdev_raid.sh@498 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:46.920 [2024-07-10 13:33:26.286792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:46.920 [2024-07-10 13:33:26.286825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.920 [2024-07-10 13:33:26.286854] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cacc900 00:09:46.920 [2024-07-10 13:33:26.286859] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.920 [2024-07-10 13:33:26.286910] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.920 [2024-07-10 13:33:26.286917] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:46.920 [2024-07-10 13:33:26.286928] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:09:46.920 [2024-07-10 13:33:26.286932] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:46.920 [2024-07-10 13:33:26.286950] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cacc180 00:09:46.920 [2024-07-10 13:33:26.286952] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:46.920 [2024-07-10 13:33:26.286966] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cb2ee20 00:09:46.920 [2024-07-10 13:33:26.286994] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cacc180 00:09:46.920 [2024-07-10 13:33:26.286997] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cacc180 00:09:46.920 [2024-07-10 13:33:26.287010] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.180 pt3 00:09:47.180 13:33:26 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:09:47.180 13:33:26 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:09:47.180 13:33:26 -- 
bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:47.180 13:33:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:47.180 13:33:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:47.180 13:33:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:47.180 13:33:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:47.180 13:33:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:47.180 13:33:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:47.180 13:33:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:47.180 13:33:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:47.180 13:33:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:47.180 13:33:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.180 13:33:26 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:47.180 13:33:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:47.180 "name": "raid_bdev1", 00:09:47.180 "uuid": "f5e3f74b-3ec0-11ef-b9c4-5b09e08d4792", 00:09:47.180 "strip_size_kb": 0, 00:09:47.180 "state": "online", 00:09:47.180 "raid_level": "raid1", 00:09:47.180 "superblock": true, 00:09:47.180 "num_base_bdevs": 4, 00:09:47.180 "num_base_bdevs_discovered": 3, 00:09:47.180 "num_base_bdevs_operational": 3, 00:09:47.180 "base_bdevs_list": [ 00:09:47.180 { 00:09:47.180 "name": null, 00:09:47.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.180 "is_configured": false, 00:09:47.180 "data_offset": 2048, 00:09:47.180 "data_size": 63488 00:09:47.180 }, 00:09:47.180 { 00:09:47.180 "name": "pt2", 00:09:47.180 "uuid": "dea553da-9d86-3254-b9b6-28a83d584200", 00:09:47.180 "is_configured": true, 00:09:47.180 "data_offset": 2048, 00:09:47.180 "data_size": 63488 00:09:47.180 }, 00:09:47.180 { 00:09:47.180 "name": "pt3", 00:09:47.180 "uuid": "75d11148-472d-bc53-b7aa-e407e9e1ec81", 00:09:47.180 "is_configured": true, 00:09:47.180 "data_offset": 2048, 00:09:47.180 "data_size": 63488 00:09:47.180 }, 00:09:47.180 { 00:09:47.180 "name": "pt4", 00:09:47.180 "uuid": "4e2db0ce-d5bc-7857-b81c-202c048f8b30", 00:09:47.180 "is_configured": true, 00:09:47.180 "data_offset": 2048, 00:09:47.180 "data_size": 63488 00:09:47.180 } 00:09:47.180 ] 00:09:47.180 }' 00:09:47.180 13:33:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:47.180 13:33:26 -- common/autotest_common.sh@10 -- # set +x 00:09:47.440 13:33:26 -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:47.440 13:33:26 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:09:47.700 [2024-07-10 13:33:26.926878] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.700 13:33:26 -- bdev/bdev_raid.sh@506 -- # '[' f5e3f74b-3ec0-11ef-b9c4-5b09e08d4792 '!=' f5e3f74b-3ec0-11ef-b9c4-5b09e08d4792 ']' 00:09:47.700 13:33:26 -- bdev/bdev_raid.sh@511 -- # killprocess 53429 00:09:47.700 13:33:26 -- common/autotest_common.sh@926 -- # '[' -z 53429 ']' 00:09:47.700 13:33:26 -- common/autotest_common.sh@930 -- # kill -0 53429 00:09:47.700 13:33:26 -- common/autotest_common.sh@931 -- # uname 00:09:47.700 13:33:26 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:09:47.700 13:33:26 -- common/autotest_common.sh@934 -- # ps -c -o command 53429 00:09:47.700 13:33:26 -- common/autotest_common.sh@934 -- # tail -1 00:09:47.700 
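Note: the verify_raid_bdev_state check recorded above reduces to one RPC call plus jq filtering — fetch every RAID bdev over the test's dedicated socket, select raid_bdev1, and compare the reported fields against the expected values. A minimal sketch of that check, assuming the same socket path and the field names visible in the JSON above (the real helper in bdev_raid.sh presumably also walks base_bdevs_list):

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # fetch all RAID bdevs and keep only raid_bdev1
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # compare reported state, level and discovered base bdev count with the expected values
    [ "$(echo "$info" | jq -r '.state')" = "online" ] &&
    [ "$(echo "$info" | jq -r '.raid_level')" = "raid1" ] &&
    [ "$(echo "$info" | jq -r '.num_base_bdevs_discovered')" -eq 3 ] || echo "raid_bdev1 state mismatch"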
13:33:26 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:09:47.700 13:33:26 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:09:47.700 killing process with pid 53429 00:09:47.700 13:33:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53429' 00:09:47.700 13:33:26 -- common/autotest_common.sh@945 -- # kill 53429 00:09:47.700 [2024-07-10 13:33:26.958610] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.700 [2024-07-10 13:33:26.958624] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.700 [2024-07-10 13:33:26.958645] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.700 [2024-07-10 13:33:26.958649] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cacc180 name raid_bdev1, state offline 00:09:47.700 13:33:26 -- common/autotest_common.sh@950 -- # wait 53429 00:09:47.700 [2024-07-10 13:33:26.977237] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.960 13:33:27 -- bdev/bdev_raid.sh@513 -- # return 0 00:09:47.960 00:09:47.960 real 0m14.630s 00:09:47.960 user 0m26.419s 00:09:47.960 sys 0m2.142s 00:09:47.960 13:33:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.960 13:33:27 -- common/autotest_common.sh@10 -- # set +x 00:09:47.960 ************************************ 00:09:47.960 END TEST raid_superblock_test 00:09:47.960 ************************************ 00:09:47.960 13:33:27 -- bdev/bdev_raid.sh@733 -- # '[' '' = true ']' 00:09:47.960 13:33:27 -- bdev/bdev_raid.sh@742 -- # '[' n == y ']' 00:09:47.960 13:33:27 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:09:47.960 00:09:47.960 real 3m48.399s 00:09:47.960 user 6m32.181s 00:09:47.960 sys 0m43.323s 00:09:47.960 13:33:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.960 13:33:27 -- common/autotest_common.sh@10 -- # set +x 00:09:47.960 ************************************ 00:09:47.960 END TEST bdev_raid 00:09:47.960 ************************************ 00:09:47.960 13:33:27 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:09:47.960 13:33:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:47.960 13:33:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:47.960 13:33:27 -- common/autotest_common.sh@10 -- # set +x 00:09:47.960 ************************************ 00:09:47.960 START TEST bdevperf_config 00:09:47.960 ************************************ 00:09:47.960 13:33:27 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:09:48.220 * Looking for test storage... 
00:09:48.220 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:09:48.220 13:33:27 -- bdevperf/test_config.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:09:48.220 13:33:27 -- bdevperf/common.sh@5 -- # bdevperf=/usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:09:48.220 13:33:27 -- bdevperf/test_config.sh@12 -- # jsonconf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:09:48.220 13:33:27 -- bdevperf/test_config.sh@13 -- # testconf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:09:48.220 13:33:27 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:48.220 13:33:27 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:09:48.220 13:33:27 -- bdevperf/common.sh@8 -- # local job_section=global 00:09:48.220 13:33:27 -- bdevperf/common.sh@9 -- # local rw=read 00:09:48.220 13:33:27 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:09:48.220 13:33:27 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:09:48.220 13:33:27 -- bdevperf/common.sh@13 -- # cat 00:09:48.220 13:33:27 -- bdevperf/common.sh@18 -- # job='[global]' 00:09:48.220 00:09:48.220 13:33:27 -- bdevperf/common.sh@19 -- # echo 00:09:48.220 13:33:27 -- bdevperf/common.sh@20 -- # cat 00:09:48.220 13:33:27 -- bdevperf/test_config.sh@18 -- # create_job job0 00:09:48.220 13:33:27 -- bdevperf/common.sh@8 -- # local job_section=job0 00:09:48.220 13:33:27 -- bdevperf/common.sh@9 -- # local rw= 00:09:48.220 13:33:27 -- bdevperf/common.sh@10 -- # local filename= 00:09:48.220 13:33:27 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:09:48.220 13:33:27 -- bdevperf/common.sh@18 -- # job='[job0]' 00:09:48.220 00:09:48.220 13:33:27 -- bdevperf/common.sh@19 -- # echo 00:09:48.220 13:33:27 -- bdevperf/common.sh@20 -- # cat 00:09:48.220 13:33:27 -- bdevperf/test_config.sh@19 -- # create_job job1 00:09:48.220 13:33:27 -- bdevperf/common.sh@8 -- # local job_section=job1 00:09:48.220 13:33:27 -- bdevperf/common.sh@9 -- # local rw= 00:09:48.220 13:33:27 -- bdevperf/common.sh@10 -- # local filename= 00:09:48.220 13:33:27 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:09:48.220 13:33:27 -- bdevperf/common.sh@18 -- # job='[job1]' 00:09:48.220 00:09:48.220 13:33:27 -- bdevperf/common.sh@19 -- # echo 00:09:48.220 13:33:27 -- bdevperf/common.sh@20 -- # cat 00:09:48.220 13:33:27 -- bdevperf/test_config.sh@20 -- # create_job job2 00:09:48.220 13:33:27 -- bdevperf/common.sh@8 -- # local job_section=job2 00:09:48.220 13:33:27 -- bdevperf/common.sh@9 -- # local rw= 00:09:48.220 13:33:27 -- bdevperf/common.sh@10 -- # local filename= 00:09:48.220 13:33:27 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:09:48.220 13:33:27 -- bdevperf/common.sh@18 -- # job='[job2]' 00:09:48.220 00:09:48.220 13:33:27 -- bdevperf/common.sh@19 -- # echo 00:09:48.220 13:33:27 -- bdevperf/common.sh@20 -- # cat 00:09:48.220 13:33:27 -- bdevperf/test_config.sh@21 -- # create_job job3 00:09:48.220 13:33:27 -- bdevperf/common.sh@8 -- # local job_section=job3 00:09:48.220 13:33:27 -- bdevperf/common.sh@9 -- # local rw= 00:09:48.220 13:33:27 -- bdevperf/common.sh@10 -- # local filename= 00:09:48.220 13:33:27 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:09:48.220 13:33:27 -- bdevperf/common.sh@18 -- # job='[job3]' 00:09:48.220 00:09:48.220 13:33:27 -- bdevperf/common.sh@19 -- # echo 00:09:48.220 13:33:27 -- bdevperf/common.sh@20 -- # cat 00:09:48.220 13:33:27 -- 
bdevperf/test_config.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:09:50.759 13:33:30 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-10 13:33:27.426955] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:50.759 [2024-07-10 13:33:27.427304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:50.759 Using job config with 4 jobs 00:09:50.759 EAL: TSC is not safe to use in SMP mode 00:09:50.759 EAL: TSC is not invariant 00:09:50.759 [2024-07-10 13:33:27.853310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.759 [2024-07-10 13:33:27.939961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.759 cpumask for '\''job0'\'' is too big 00:09:50.759 cpumask for '\''job1'\'' is too big 00:09:50.759 cpumask for '\''job2'\'' is too big 00:09:50.759 cpumask for '\''job3'\'' is too big 00:09:50.759 Running I/O for 2 seconds... 00:09:50.759 00:09:50.759 Latency(us) 00:09:50.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:50.759 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:09:50.759 Malloc0 : 2.00 417961.79 408.17 0.00 0.00 612.32 166.90 1249.54 00:09:50.759 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:09:50.759 Malloc0 : 2.00 417947.59 408.15 0.00 0.00 612.22 152.62 1078.17 00:09:50.759 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:09:50.759 Malloc0 : 2.00 417993.22 408.20 0.00 0.00 612.05 157.08 896.10 00:09:50.759 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:09:50.759 Malloc0 : 2.00 417973.59 408.18 0.00 0.00 611.97 152.62 756.86 00:09:50.759 =================================================================================================================== 00:09:50.759 Total : 1671876.20 1632.69 0.00 0.00 612.14 152.62 1249.54' 00:09:50.759 13:33:30 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-10 13:33:27.426955] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:50.759 [2024-07-10 13:33:27.427304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:50.759 Using job config with 4 jobs 00:09:50.759 EAL: TSC is not safe to use in SMP mode 00:09:50.759 EAL: TSC is not invariant 00:09:50.759 [2024-07-10 13:33:27.853310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.759 [2024-07-10 13:33:27.939961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.759 cpumask for '\''job0'\'' is too big 00:09:50.759 cpumask for '\''job1'\'' is too big 00:09:50.759 cpumask for '\''job2'\'' is too big 00:09:50.759 cpumask for '\''job3'\'' is too big 00:09:50.759 Running I/O for 2 seconds... 
00:09:50.759 00:09:50.759 Latency(us) 00:09:50.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:50.759 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:09:50.759 Malloc0 : 2.00 417961.79 408.17 0.00 0.00 612.32 166.90 1249.54 00:09:50.759 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:09:50.759 Malloc0 : 2.00 417947.59 408.15 0.00 0.00 612.22 152.62 1078.17 00:09:50.759 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:09:50.759 Malloc0 : 2.00 417993.22 408.20 0.00 0.00 612.05 157.08 896.10 00:09:50.759 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:09:50.759 Malloc0 : 2.00 417973.59 408.18 0.00 0.00 611.97 152.62 756.86 00:09:50.759 =================================================================================================================== 00:09:50.759 Total : 1671876.20 1632.69 0.00 0.00 612.14 152.62 1249.54' 00:09:51.021 13:33:30 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:09:51.021 13:33:30 -- bdevperf/common.sh@32 -- # echo '[2024-07-10 13:33:27.426955] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:51.021 [2024-07-10 13:33:27.427304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:51.021 Using job config with 4 jobs 00:09:51.021 EAL: TSC is not safe to use in SMP mode 00:09:51.021 EAL: TSC is not invariant 00:09:51.021 [2024-07-10 13:33:27.853310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.021 [2024-07-10 13:33:27.939961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.021 cpumask for '\''job0'\'' is too big 00:09:51.021 cpumask for '\''job1'\'' is too big 00:09:51.021 cpumask for '\''job2'\'' is too big 00:09:51.021 cpumask for '\''job3'\'' is too big 00:09:51.021 Running I/O for 2 seconds... 00:09:51.021 00:09:51.021 Latency(us) 00:09:51.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.021 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:09:51.021 Malloc0 : 2.00 417961.79 408.17 0.00 0.00 612.32 166.90 1249.54 00:09:51.021 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:09:51.021 Malloc0 : 2.00 417947.59 408.15 0.00 0.00 612.22 152.62 1078.17 00:09:51.021 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:09:51.021 Malloc0 : 2.00 417993.22 408.20 0.00 0.00 612.05 157.08 896.10 00:09:51.021 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:09:51.021 Malloc0 : 2.00 417973.59 408.18 0.00 0.00 611.97 152.62 756.86 00:09:51.021 =================================================================================================================== 00:09:51.021 Total : 1671876.20 1632.69 0.00 0.00 612.14 152.62 1249.54' 00:09:51.021 13:33:30 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:09:51.021 13:33:30 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:09:51.021 13:33:30 -- bdevperf/test_config.sh@25 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:09:51.021 [2024-07-10 13:33:30.146016] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
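Note: each create_job call earlier in this run appends an INI-style section to test.conf, which bdevperf then reads via -j alongside the bdev definitions in conf.json. Reconstructed from the arguments shown (a global section carrying rw=read and filename=Malloc0, plus four empty job sections), the generated file for this first pass would look roughly as follows; the exact key names are an assumption based on bdevperf's fio-style job format, not copied from the run:

    [global]
    filename=Malloc0
    rw=read

    [job0]

    [job1]

    [job2]

    [job3]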
00:09:51.021 [2024-07-10 13:33:30.146333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:51.283 EAL: TSC is not safe to use in SMP mode 00:09:51.283 EAL: TSC is not invariant 00:09:51.283 [2024-07-10 13:33:30.577050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.543 [2024-07-10 13:33:30.666872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.543 cpumask for 'job0' is too big 00:09:51.543 cpumask for 'job1' is too big 00:09:51.543 cpumask for 'job2' is too big 00:09:51.543 cpumask for 'job3' is too big 00:09:54.077 13:33:32 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:09:54.077 Running I/O for 2 seconds... 00:09:54.077 00:09:54.077 Latency(us) 00:09:54.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.077 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:09:54.077 Malloc0 : 2.00 411151.07 401.51 0.00 0.00 622.45 161.55 1235.26 00:09:54.077 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:09:54.078 Malloc0 : 2.00 411137.30 401.50 0.00 0.00 622.38 150.84 1071.03 00:09:54.078 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:09:54.078 Malloc0 : 2.00 411124.77 401.49 0.00 0.00 622.30 163.33 888.96 00:09:54.078 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:09:54.078 Malloc0 : 2.00 411193.72 401.56 0.00 0.00 622.08 67.83 753.29 00:09:54.078 =================================================================================================================== 00:09:54.078 Total : 1644606.85 1606.06 0.00 0.00 622.30 67.83 1235.26' 00:09:54.078 13:33:32 -- bdevperf/test_config.sh@27 -- # cleanup 00:09:54.078 13:33:32 -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:09:54.078 13:33:32 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:09:54.078 13:33:32 -- bdevperf/common.sh@8 -- # local job_section=job0 00:09:54.078 13:33:32 -- bdevperf/common.sh@9 -- # local rw=write 00:09:54.078 13:33:32 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:09:54.078 13:33:32 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:09:54.078 13:33:32 -- bdevperf/common.sh@18 -- # job='[job0]' 00:09:54.078 00:09:54.078 13:33:32 -- bdevperf/common.sh@19 -- # echo 00:09:54.078 13:33:32 -- bdevperf/common.sh@20 -- # cat 00:09:54.078 13:33:32 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:09:54.078 13:33:32 -- bdevperf/common.sh@8 -- # local job_section=job1 00:09:54.078 13:33:32 -- bdevperf/common.sh@9 -- # local rw=write 00:09:54.078 13:33:32 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:09:54.078 13:33:32 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:09:54.078 13:33:32 -- bdevperf/common.sh@18 -- # job='[job1]' 00:09:54.078 00:09:54.078 13:33:32 -- bdevperf/common.sh@19 -- # echo 00:09:54.078 13:33:32 -- bdevperf/common.sh@20 -- # cat 00:09:54.078 13:33:32 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:09:54.078 13:33:32 -- bdevperf/common.sh@8 -- # local job_section=job2 00:09:54.078 13:33:32 -- bdevperf/common.sh@9 -- # local rw=write 00:09:54.078 13:33:32 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:09:54.078 13:33:32 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:09:54.078 13:33:32 -- bdevperf/common.sh@18 -- # 
job='[job2]' 00:09:54.078 00:09:54.078 13:33:32 -- bdevperf/common.sh@19 -- # echo 00:09:54.078 13:33:32 -- bdevperf/common.sh@20 -- # cat 00:09:54.078 13:33:32 -- bdevperf/test_config.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:09:56.610 13:33:35 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-10 13:33:32.878863] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:56.610 [2024-07-10 13:33:32.879259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:56.610 Using job config with 3 jobs 00:09:56.611 EAL: TSC is not safe to use in SMP mode 00:09:56.611 EAL: TSC is not invariant 00:09:56.611 [2024-07-10 13:33:33.313960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.611 [2024-07-10 13:33:33.399772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.611 cpumask for '\''job0'\'' is too big 00:09:56.611 cpumask for '\''job1'\'' is too big 00:09:56.611 cpumask for '\''job2'\'' is too big 00:09:56.611 Running I/O for 2 seconds... 00:09:56.611 00:09:56.611 Latency(us) 00:09:56.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.611 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:09:56.611 Malloc0 : 2.00 515055.78 502.98 0.00 0.00 496.85 192.79 903.24 00:09:56.611 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:09:56.611 Malloc0 : 2.00 515077.93 503.01 0.00 0.00 496.73 153.51 756.86 00:09:56.611 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:09:56.611 Malloc0 : 2.00 515062.74 502.99 0.00 0.00 496.67 112.01 639.05 00:09:56.611 =================================================================================================================== 00:09:56.611 Total : 1545196.46 1508.98 0.00 0.00 496.75 112.01 903.24' 00:09:56.611 13:33:35 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-10 13:33:32.878863] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:56.611 [2024-07-10 13:33:32.879259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:56.611 Using job config with 3 jobs 00:09:56.611 EAL: TSC is not safe to use in SMP mode 00:09:56.611 EAL: TSC is not invariant 00:09:56.611 [2024-07-10 13:33:33.313960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.611 [2024-07-10 13:33:33.399772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.611 cpumask for '\''job0'\'' is too big 00:09:56.611 cpumask for '\''job1'\'' is too big 00:09:56.611 cpumask for '\''job2'\'' is too big 00:09:56.611 Running I/O for 2 seconds... 
00:09:56.611 00:09:56.611 Latency(us) 00:09:56.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.611 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:09:56.611 Malloc0 : 2.00 515055.78 502.98 0.00 0.00 496.85 192.79 903.24 00:09:56.611 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:09:56.611 Malloc0 : 2.00 515077.93 503.01 0.00 0.00 496.73 153.51 756.86 00:09:56.611 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:09:56.611 Malloc0 : 2.00 515062.74 502.99 0.00 0.00 496.67 112.01 639.05 00:09:56.611 =================================================================================================================== 00:09:56.611 Total : 1545196.46 1508.98 0.00 0.00 496.75 112.01 903.24' 00:09:56.611 13:33:35 -- bdevperf/common.sh@32 -- # echo '[2024-07-10 13:33:32.878863] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:56.611 [2024-07-10 13:33:32.879259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:56.611 Using job config with 3 jobs 00:09:56.611 EAL: TSC is not safe to use in SMP mode 00:09:56.611 EAL: TSC is not invariant 00:09:56.611 [2024-07-10 13:33:33.313960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.611 [2024-07-10 13:33:33.399772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.611 cpumask for '\''job0'\'' is too big 00:09:56.611 cpumask for '\''job1'\'' is too big 00:09:56.611 cpumask for '\''job2'\'' is too big 00:09:56.611 Running I/O for 2 seconds... 00:09:56.611 00:09:56.611 Latency(us) 00:09:56.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.611 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:09:56.611 Malloc0 : 2.00 515055.78 502.98 0.00 0.00 496.85 192.79 903.24 00:09:56.611 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:09:56.611 Malloc0 : 2.00 515077.93 503.01 0.00 0.00 496.73 153.51 756.86 00:09:56.611 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:09:56.611 Malloc0 : 2.00 515062.74 502.99 0.00 0.00 496.67 112.01 639.05 00:09:56.611 =================================================================================================================== 00:09:56.611 Total : 1545196.46 1508.98 0.00 0.00 496.75 112.01 903.24' 00:09:56.611 13:33:35 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:09:56.611 13:33:35 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:09:56.611 13:33:35 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:09:56.611 13:33:35 -- bdevperf/test_config.sh@35 -- # cleanup 00:09:56.611 13:33:35 -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:09:56.611 13:33:35 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:09:56.611 13:33:35 -- bdevperf/common.sh@8 -- # local job_section=global 00:09:56.611 13:33:35 -- bdevperf/common.sh@9 -- # local rw=rw 00:09:56.611 13:33:35 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:09:56.611 13:33:35 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:09:56.611 13:33:35 -- bdevperf/common.sh@13 -- # cat 00:09:56.611 13:33:35 -- bdevperf/common.sh@18 -- # job='[global]' 00:09:56.611 00:09:56.611 13:33:35 -- bdevperf/common.sh@19 -- # echo 00:09:56.611 13:33:35 -- 
bdevperf/common.sh@20 -- # cat 00:09:56.611 13:33:35 -- bdevperf/test_config.sh@38 -- # create_job job0 00:09:56.611 13:33:35 -- bdevperf/common.sh@8 -- # local job_section=job0 00:09:56.611 13:33:35 -- bdevperf/common.sh@9 -- # local rw= 00:09:56.611 13:33:35 -- bdevperf/common.sh@10 -- # local filename= 00:09:56.611 13:33:35 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:09:56.611 13:33:35 -- bdevperf/common.sh@18 -- # job='[job0]' 00:09:56.611 00:09:56.611 13:33:35 -- bdevperf/common.sh@19 -- # echo 00:09:56.611 13:33:35 -- bdevperf/common.sh@20 -- # cat 00:09:56.611 13:33:35 -- bdevperf/test_config.sh@39 -- # create_job job1 00:09:56.611 13:33:35 -- bdevperf/common.sh@8 -- # local job_section=job1 00:09:56.611 13:33:35 -- bdevperf/common.sh@9 -- # local rw= 00:09:56.611 13:33:35 -- bdevperf/common.sh@10 -- # local filename= 00:09:56.611 13:33:35 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:09:56.611 13:33:35 -- bdevperf/common.sh@18 -- # job='[job1]' 00:09:56.611 00:09:56.611 13:33:35 -- bdevperf/common.sh@19 -- # echo 00:09:56.611 13:33:35 -- bdevperf/common.sh@20 -- # cat 00:09:56.611 13:33:35 -- bdevperf/test_config.sh@40 -- # create_job job2 00:09:56.611 13:33:35 -- bdevperf/common.sh@8 -- # local job_section=job2 00:09:56.611 13:33:35 -- bdevperf/common.sh@9 -- # local rw= 00:09:56.611 13:33:35 -- bdevperf/common.sh@10 -- # local filename= 00:09:56.611 13:33:35 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:09:56.611 13:33:35 -- bdevperf/common.sh@18 -- # job='[job2]' 00:09:56.611 00:09:56.611 13:33:35 -- bdevperf/common.sh@19 -- # echo 00:09:56.611 13:33:35 -- bdevperf/common.sh@20 -- # cat 00:09:56.611 13:33:35 -- bdevperf/test_config.sh@41 -- # create_job job3 00:09:56.611 13:33:35 -- bdevperf/common.sh@8 -- # local job_section=job3 00:09:56.611 13:33:35 -- bdevperf/common.sh@9 -- # local rw= 00:09:56.611 13:33:35 -- bdevperf/common.sh@10 -- # local filename= 00:09:56.611 13:33:35 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:09:56.611 13:33:35 -- bdevperf/common.sh@18 -- # job='[job3]' 00:09:56.611 00:09:56.611 13:33:35 -- bdevperf/common.sh@19 -- # echo 00:09:56.611 13:33:35 -- bdevperf/common.sh@20 -- # cat 00:09:56.611 13:33:35 -- bdevperf/test_config.sh@42 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:09:59.139 13:33:38 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-10 13:33:35.626350] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:59.139 [2024-07-10 13:33:35.626581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:59.139 Using job config with 4 jobs 00:09:59.139 EAL: TSC is not safe to use in SMP mode 00:09:59.139 EAL: TSC is not invariant 00:09:59.139 [2024-07-10 13:33:36.085900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.139 [2024-07-10 13:33:36.174522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.139 cpumask for '\''job0'\'' is too big 00:09:59.139 cpumask for '\''job1'\'' is too big 00:09:59.139 cpumask for '\''job2'\'' is too big 00:09:59.139 cpumask for '\''job3'\'' is too big 00:09:59.139 Running I/O for 2 seconds... 
00:09:59.139 00:09:59.139 Latency(us) 00:09:59.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.139 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc0 : 2.00 185074.70 180.74 0.00 0.00 1382.98 408.78 2856.08 00:09:59.139 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc1 : 2.00 185080.91 180.74 0.00 0.00 1382.74 424.84 2841.80 00:09:59.139 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc0 : 2.00 185073.79 180.74 0.00 0.00 1382.42 405.21 2427.67 00:09:59.139 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc1 : 2.00 185066.95 180.73 0.00 0.00 1382.28 390.93 2413.39 00:09:59.139 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc0 : 2.00 185058.36 180.72 0.00 0.00 1381.89 399.85 2013.54 00:09:59.139 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc1 : 2.00 185046.54 180.71 0.00 0.00 1381.88 385.57 1999.26 00:09:59.139 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc0 : 2.00 185038.59 180.70 0.00 0.00 1381.53 389.14 1627.97 00:09:59.139 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc1 : 2.00 185138.57 180.80 0.00 0.00 1380.72 74.53 1642.25 00:09:59.139 =================================================================================================================== 00:09:59.139 Total : 1480578.41 1445.88 0.00 0.00 1382.05 74.53 2856.08' 00:09:59.139 13:33:38 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-10 13:33:35.626350] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:59.139 [2024-07-10 13:33:35.626581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:59.139 Using job config with 4 jobs 00:09:59.139 EAL: TSC is not safe to use in SMP mode 00:09:59.139 EAL: TSC is not invariant 00:09:59.139 [2024-07-10 13:33:36.085900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.139 [2024-07-10 13:33:36.174522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.139 cpumask for '\''job0'\'' is too big 00:09:59.139 cpumask for '\''job1'\'' is too big 00:09:59.139 cpumask for '\''job2'\'' is too big 00:09:59.139 cpumask for '\''job3'\'' is too big 00:09:59.139 Running I/O for 2 seconds... 
00:09:59.139 00:09:59.139 Latency(us) 00:09:59.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.139 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc0 : 2.00 185074.70 180.74 0.00 0.00 1382.98 408.78 2856.08 00:09:59.139 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc1 : 2.00 185080.91 180.74 0.00 0.00 1382.74 424.84 2841.80 00:09:59.139 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc0 : 2.00 185073.79 180.74 0.00 0.00 1382.42 405.21 2427.67 00:09:59.139 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc1 : 2.00 185066.95 180.73 0.00 0.00 1382.28 390.93 2413.39 00:09:59.139 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc0 : 2.00 185058.36 180.72 0.00 0.00 1381.89 399.85 2013.54 00:09:59.139 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc1 : 2.00 185046.54 180.71 0.00 0.00 1381.88 385.57 1999.26 00:09:59.139 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc0 : 2.00 185038.59 180.70 0.00 0.00 1381.53 389.14 1627.97 00:09:59.139 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc1 : 2.00 185138.57 180.80 0.00 0.00 1380.72 74.53 1642.25 00:09:59.139 =================================================================================================================== 00:09:59.139 Total : 1480578.41 1445.88 0.00 0.00 1382.05 74.53 2856.08' 00:09:59.139 13:33:38 -- bdevperf/common.sh@32 -- # echo '[2024-07-10 13:33:35.626350] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:59.139 [2024-07-10 13:33:35.626581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:59.139 Using job config with 4 jobs 00:09:59.139 EAL: TSC is not safe to use in SMP mode 00:09:59.139 EAL: TSC is not invariant 00:09:59.139 [2024-07-10 13:33:36.085900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.139 [2024-07-10 13:33:36.174522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.139 cpumask for '\''job0'\'' is too big 00:09:59.139 cpumask for '\''job1'\'' is too big 00:09:59.139 cpumask for '\''job2'\'' is too big 00:09:59.139 cpumask for '\''job3'\'' is too big 00:09:59.139 Running I/O for 2 seconds... 
00:09:59.139 00:09:59.139 Latency(us) 00:09:59.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.139 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc0 : 2.00 185074.70 180.74 0.00 0.00 1382.98 408.78 2856.08 00:09:59.139 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc1 : 2.00 185080.91 180.74 0.00 0.00 1382.74 424.84 2841.80 00:09:59.139 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc0 : 2.00 185073.79 180.74 0.00 0.00 1382.42 405.21 2427.67 00:09:59.139 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc1 : 2.00 185066.95 180.73 0.00 0.00 1382.28 390.93 2413.39 00:09:59.139 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc0 : 2.00 185058.36 180.72 0.00 0.00 1381.89 399.85 2013.54 00:09:59.139 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc1 : 2.00 185046.54 180.71 0.00 0.00 1381.88 385.57 1999.26 00:09:59.139 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc0 : 2.00 185038.59 180.70 0.00 0.00 1381.53 389.14 1627.97 00:09:59.139 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:09:59.139 Malloc1 : 2.00 185138.57 180.80 0.00 0.00 1380.72 74.53 1642.25 00:09:59.139 =================================================================================================================== 00:09:59.139 Total : 1480578.41 1445.88 0.00 0.00 1382.05 74.53 2856.08' 00:09:59.139 13:33:38 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:09:59.139 13:33:38 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:09:59.139 13:33:38 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:09:59.139 13:33:38 -- bdevperf/test_config.sh@44 -- # cleanup 00:09:59.139 13:33:38 -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:09:59.140 13:33:38 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:59.140 00:09:59.140 real 0m11.163s 00:09:59.140 user 0m8.987s 00:09:59.140 sys 0m2.224s 00:09:59.140 13:33:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.140 13:33:38 -- common/autotest_common.sh@10 -- # set +x 00:09:59.140 ************************************ 00:09:59.140 END TEST bdevperf_config 00:09:59.140 ************************************ 00:09:59.140 13:33:38 -- spdk/autotest.sh@198 -- # uname -s 00:09:59.140 13:33:38 -- spdk/autotest.sh@198 -- # [[ FreeBSD == Linux ]] 00:09:59.140 13:33:38 -- spdk/autotest.sh@204 -- # uname -s 00:09:59.140 13:33:38 -- spdk/autotest.sh@204 -- # [[ FreeBSD == Linux ]] 00:09:59.140 13:33:38 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:09:59.140 13:33:38 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:59.140 13:33:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:59.140 13:33:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:59.140 13:33:38 -- common/autotest_common.sh@10 -- # set +x 00:09:59.140 ************************************ 00:09:59.140 START TEST blockdev_nvme 00:09:59.140 ************************************ 00:09:59.140 13:33:38 -- common/autotest_common.sh@1104 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:59.397 * Looking for test storage... 00:09:59.397 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:09:59.397 13:33:38 -- bdev/blockdev.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:59.397 13:33:38 -- bdev/nbd_common.sh@6 -- # set -e 00:09:59.397 13:33:38 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:59.397 13:33:38 -- bdev/blockdev.sh@13 -- # conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:59.397 13:33:38 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:59.397 13:33:38 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:59.397 13:33:38 -- bdev/blockdev.sh@18 -- # : 00:09:59.397 13:33:38 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:09:59.397 13:33:38 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:09:59.397 13:33:38 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:09:59.397 13:33:38 -- bdev/blockdev.sh@672 -- # uname -s 00:09:59.397 13:33:38 -- bdev/blockdev.sh@672 -- # '[' FreeBSD = Linux ']' 00:09:59.397 13:33:38 -- bdev/blockdev.sh@677 -- # PRE_RESERVED_MEM=2048 00:09:59.397 13:33:38 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:09:59.397 13:33:38 -- bdev/blockdev.sh@681 -- # crypto_device= 00:09:59.397 13:33:38 -- bdev/blockdev.sh@682 -- # dek= 00:09:59.397 13:33:38 -- bdev/blockdev.sh@683 -- # env_ctx= 00:09:59.397 13:33:38 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:09:59.397 13:33:38 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:09:59.397 13:33:38 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:09:59.397 13:33:38 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:09:59.397 13:33:38 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:09:59.397 13:33:38 -- bdev/blockdev.sh@44 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:59.397 13:33:38 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=53979 00:09:59.397 13:33:38 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:59.397 13:33:38 -- bdev/blockdev.sh@47 -- # waitforlisten 53979 00:09:59.397 13:33:38 -- common/autotest_common.sh@819 -- # '[' -z 53979 ']' 00:09:59.397 13:33:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.397 13:33:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:59.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.397 13:33:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.397 13:33:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:59.397 13:33:38 -- common/autotest_common.sh@10 -- # set +x 00:09:59.397 [2024-07-10 13:33:38.634682] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
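Note: stripped of the run_test wrappers, the target bring-up logged here is: start spdk_tgt in the background, record its pid, and block until the default RPC socket answers before issuing any rpc_cmd calls. A rough equivalent, assuming the stock /var/tmp/spdk.sock socket that waitforlisten polls and using rpc_get_methods as the liveness probe:

    /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # poll until the RPC server responds (waitforlisten does this with a bounded retry loop)
    until /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # ... issue the RPCs under test, then tear down with: kill "$spdk_tgt_pid"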
00:09:59.397 [2024-07-10 13:33:38.634854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:59.960 EAL: TSC is not safe to use in SMP mode 00:09:59.960 EAL: TSC is not invariant 00:09:59.961 [2024-07-10 13:33:39.083747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.961 [2024-07-10 13:33:39.169769] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:59.961 [2024-07-10 13:33:39.169851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.218 13:33:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:00.218 13:33:39 -- common/autotest_common.sh@852 -- # return 0 00:10:00.218 13:33:39 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:10:00.218 13:33:39 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:10:00.218 13:33:39 -- bdev/blockdev.sh@79 -- # local json 00:10:00.218 13:33:39 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:10:00.218 13:33:39 -- bdev/blockdev.sh@80 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:00.477 13:33:39 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:10:00.477 13:33:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:00.477 13:33:39 -- common/autotest_common.sh@10 -- # set +x 00:10:00.477 [2024-07-10 13:33:39.626965] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:10:00.477 13:33:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:00.477 13:33:39 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:10:00.477 13:33:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:00.477 13:33:39 -- common/autotest_common.sh@10 -- # set +x 00:10:00.477 13:33:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:00.477 13:33:39 -- bdev/blockdev.sh@738 -- # cat 00:10:00.477 13:33:39 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:10:00.477 13:33:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:00.477 13:33:39 -- common/autotest_common.sh@10 -- # set +x 00:10:00.477 13:33:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:00.477 13:33:39 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:10:00.477 13:33:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:00.477 13:33:39 -- common/autotest_common.sh@10 -- # set +x 00:10:00.477 13:33:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:00.477 13:33:39 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:00.477 13:33:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:00.477 13:33:39 -- common/autotest_common.sh@10 -- # set +x 00:10:00.477 13:33:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:00.477 13:33:39 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:10:00.477 13:33:39 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:10:00.477 13:33:39 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:10:00.477 13:33:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:00.477 13:33:39 -- common/autotest_common.sh@10 -- # set +x 00:10:00.477 13:33:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:00.477 13:33:39 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:10:00.477 13:33:39 -- 
bdev/blockdev.sh@747 -- # jq -r .name 00:10:00.477 13:33:39 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "047decc8-3ec1-11ef-b9c4-5b09e08d4792"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "047decc8-3ec1-11ef-b9c4-5b09e08d4792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:00.477 13:33:39 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:10:00.477 13:33:39 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:10:00.477 13:33:39 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:10:00.477 13:33:39 -- bdev/blockdev.sh@752 -- # killprocess 53979 00:10:00.477 13:33:39 -- common/autotest_common.sh@926 -- # '[' -z 53979 ']' 00:10:00.477 13:33:39 -- common/autotest_common.sh@930 -- # kill -0 53979 00:10:00.477 13:33:39 -- common/autotest_common.sh@931 -- # uname 00:10:00.477 13:33:39 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:10:00.477 13:33:39 -- common/autotest_common.sh@934 -- # ps -c -o command 53979 00:10:00.477 13:33:39 -- common/autotest_common.sh@934 -- # tail -1 00:10:00.477 13:33:39 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:10:00.477 13:33:39 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:10:00.477 killing process with pid 53979 00:10:00.477 13:33:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53979' 00:10:00.477 13:33:39 -- common/autotest_common.sh@945 -- # kill 53979 00:10:00.477 13:33:39 -- common/autotest_common.sh@950 -- # wait 53979 00:10:00.736 13:33:40 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:00.736 13:33:40 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:00.736 13:33:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:00.736 13:33:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:00.736 13:33:40 -- common/autotest_common.sh@10 -- # set +x 00:10:00.736 ************************************ 00:10:00.736 START TEST bdev_hello_world 00:10:00.736 ************************************ 00:10:00.736 13:33:40 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:00.736 [2024-07-10 13:33:40.041443] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 
23.11.0 initialization... 00:10:00.736 [2024-07-10 13:33:40.041826] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:01.303 EAL: TSC is not safe to use in SMP mode 00:10:01.303 EAL: TSC is not invariant 00:10:01.303 [2024-07-10 13:33:40.474560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.303 [2024-07-10 13:33:40.562776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.303 [2024-07-10 13:33:40.618122] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:10:01.561 [2024-07-10 13:33:40.688319] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:01.561 [2024-07-10 13:33:40.688353] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:01.561 [2024-07-10 13:33:40.688362] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:01.561 [2024-07-10 13:33:40.688872] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:01.561 [2024-07-10 13:33:40.689220] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:01.561 [2024-07-10 13:33:40.689242] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:01.561 [2024-07-10 13:33:40.689360] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:10:01.561 00:10:01.561 [2024-07-10 13:33:40.689389] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:01.561 00:10:01.561 real 0m0.804s 00:10:01.561 user 0m0.323s 00:10:01.561 sys 0m0.481s 00:10:01.561 13:33:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.561 ************************************ 00:10:01.561 END TEST bdev_hello_world 00:10:01.561 ************************************ 00:10:01.561 13:33:40 -- common/autotest_common.sh@10 -- # set +x 00:10:01.561 13:33:40 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:10:01.561 13:33:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:01.561 13:33:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:01.561 13:33:40 -- common/autotest_common.sh@10 -- # set +x 00:10:01.561 ************************************ 00:10:01.561 START TEST bdev_bounds 00:10:01.561 ************************************ 00:10:01.561 13:33:40 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:10:01.561 13:33:40 -- bdev/blockdev.sh@288 -- # bdevio_pid=54038 00:10:01.561 13:33:40 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:01.561 13:33:40 -- bdev/blockdev.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:01.561 Process bdevio pid: 54038 00:10:01.561 13:33:40 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 54038' 00:10:01.561 13:33:40 -- bdev/blockdev.sh@291 -- # waitforlisten 54038 00:10:01.561 13:33:40 -- common/autotest_common.sh@819 -- # '[' -z 54038 ']' 00:10:01.561 13:33:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.561 13:33:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:01.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.561 13:33:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:01.561 13:33:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:01.561 13:33:40 -- common/autotest_common.sh@10 -- # set +x 00:10:01.561 [2024-07-10 13:33:40.897175] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:01.561 [2024-07-10 13:33:40.897440] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:02.127 EAL: TSC is not safe to use in SMP mode 00:10:02.127 EAL: TSC is not invariant 00:10:02.127 [2024-07-10 13:33:41.324248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:02.127 [2024-07-10 13:33:41.414849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.127 [2024-07-10 13:33:41.414705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.127 [2024-07-10 13:33:41.414852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.127 [2024-07-10 13:33:41.469444] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:10:02.694 13:33:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:02.694 13:33:41 -- common/autotest_common.sh@852 -- # return 0 00:10:02.694 13:33:41 -- bdev/blockdev.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:02.694 I/O targets: 00:10:02.694 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:02.694 00:10:02.694 00:10:02.694 CUnit - A unit testing framework for C - Version 2.1-3 00:10:02.694 http://cunit.sourceforge.net/ 00:10:02.694 00:10:02.694 00:10:02.694 Suite: bdevio tests on: Nvme0n1 00:10:02.694 Test: blockdev write read block ...passed 00:10:02.694 Test: blockdev write zeroes read block ...passed 00:10:02.694 Test: blockdev write zeroes read no split ...passed 00:10:02.694 Test: blockdev write zeroes read split ...passed 00:10:02.694 Test: blockdev write zeroes read split partial ...passed 00:10:02.694 Test: blockdev reset ...[2024-07-10 13:33:41.880268] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:02.694 passed 00:10:02.694 Test: blockdev write read 8 blocks ...[2024-07-10 13:33:41.881312] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:02.694 passed 00:10:02.694 Test: blockdev write read size > 128k ...passed 00:10:02.694 Test: blockdev write read invalid size ...passed 00:10:02.694 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:02.694 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:02.694 Test: blockdev write read max offset ...passed 00:10:02.694 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:02.694 Test: blockdev writev readv 8 blocks ...passed 00:10:02.694 Test: blockdev writev readv 30 x 1block ...passed 00:10:02.694 Test: blockdev writev readv block ...passed 00:10:02.694 Test: blockdev writev readv size > 128k ...passed 00:10:02.694 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:02.694 Test: blockdev comparev and writev ...[2024-07-10 13:33:41.886195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x277947000 len:0x1000 00:10:02.694 [2024-07-10 13:33:41.886234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:02.694 passed 00:10:02.694 Test: blockdev nvme passthru rw ...passed 00:10:02.694 Test: blockdev nvme passthru vendor specific ...[2024-07-10 13:33:41.886615] nvme_qpair.c: 220:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:02.694 passed 00:10:02.694 Test: blockdev nvme admin passthru ...[2024-07-10 13:33:41.886632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:02.694 passed 00:10:02.694 Test: blockdev copy ...passed 00:10:02.694 00:10:02.694 Run Summary: Type Total Ran Passed Failed Inactive 00:10:02.694 suites 1 1 n/a 0 0 00:10:02.694 tests 23 23 23 0 0 00:10:02.694 asserts 152 152 152 0 n/a 00:10:02.694 00:10:02.694 Elapsed time = 0.039 seconds 00:10:02.694 0 00:10:02.694 13:33:41 -- bdev/blockdev.sh@293 -- # killprocess 54038 00:10:02.694 13:33:41 -- common/autotest_common.sh@926 -- # '[' -z 54038 ']' 00:10:02.694 13:33:41 -- common/autotest_common.sh@930 -- # kill -0 54038 00:10:02.694 13:33:41 -- common/autotest_common.sh@931 -- # uname 00:10:02.694 13:33:41 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:10:02.694 13:33:41 -- common/autotest_common.sh@934 -- # ps -c -o command 54038 00:10:02.694 13:33:41 -- common/autotest_common.sh@934 -- # tail -1 00:10:02.694 13:33:41 -- common/autotest_common.sh@934 -- # process_name=bdevio 00:10:02.694 13:33:41 -- common/autotest_common.sh@936 -- # '[' bdevio = sudo ']' 00:10:02.694 killing process with pid 54038 00:10:02.694 13:33:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54038' 00:10:02.694 13:33:41 -- common/autotest_common.sh@945 -- # kill 54038 00:10:02.694 13:33:41 -- common/autotest_common.sh@950 -- # wait 54038 00:10:02.953 13:33:42 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:10:02.953 00:10:02.953 real 0m1.187s 00:10:02.953 user 0m2.278s 00:10:02.953 sys 0m0.509s 00:10:02.953 13:33:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.953 13:33:42 -- common/autotest_common.sh@10 -- # set +x 00:10:02.953 ************************************ 00:10:02.953 END TEST bdev_bounds 00:10:02.953 ************************************ 00:10:02.953 13:33:42 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
00:10:02.953 13:33:42 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:02.953 13:33:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:02.953 13:33:42 -- common/autotest_common.sh@10 -- # set +x 00:10:02.953 ************************************ 00:10:02.953 START TEST bdev_nbd 00:10:02.953 ************************************ 00:10:02.953 13:33:42 -- common/autotest_common.sh@1104 -- # nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:10:02.953 13:33:42 -- bdev/blockdev.sh@298 -- # uname -s 00:10:02.953 13:33:42 -- bdev/blockdev.sh@298 -- # [[ FreeBSD == Linux ]] 00:10:02.953 13:33:42 -- bdev/blockdev.sh@298 -- # return 0 00:10:02.953 00:10:02.953 real 0m0.007s 00:10:02.953 user 0m0.008s 00:10:02.953 sys 0m0.000s 00:10:02.953 13:33:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.953 13:33:42 -- common/autotest_common.sh@10 -- # set +x 00:10:02.953 ************************************ 00:10:02.953 END TEST bdev_nbd 00:10:02.953 ************************************ 00:10:02.953 13:33:42 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:10:02.953 13:33:42 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:10:02.953 skipping fio tests on NVMe due to multi-ns failures. 00:10:02.953 13:33:42 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:10:02.953 13:33:42 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:02.953 13:33:42 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:02.953 13:33:42 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:10:02.953 13:33:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:02.954 13:33:42 -- common/autotest_common.sh@10 -- # set +x 00:10:02.954 ************************************ 00:10:02.954 START TEST bdev_verify 00:10:02.954 ************************************ 00:10:02.954 13:33:42 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:02.954 [2024-07-10 13:33:42.199282] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:02.954 [2024-07-10 13:33:42.199634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:03.522 EAL: TSC is not safe to use in SMP mode 00:10:03.522 EAL: TSC is not invariant 00:10:03.522 [2024-07-10 13:33:42.630010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:03.522 [2024-07-10 13:33:42.720080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.522 [2024-07-10 13:33:42.720081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.522 [2024-07-10 13:33:42.775587] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:10:03.522 Running I/O for 5 seconds... 
00:10:08.810 00:10:08.810 Latency(us) 00:10:08.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.810 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:08.810 Verification LBA range: start 0x0 length 0xa0000 00:10:08.810 Nvme0n1 : 5.00 35944.38 140.41 0.00 0.00 3553.96 208.85 9482.20 00:10:08.810 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:08.810 Verification LBA range: start 0xa0000 length 0xa0000 00:10:08.810 Nvme0n1 : 5.00 36880.14 144.06 0.00 0.00 3463.42 161.55 9710.68 00:10:08.810 =================================================================================================================== 00:10:08.810 Total : 72824.52 284.47 0.00 0.00 3508.10 161.55 9710.68 00:10:55.571 00:10:55.571 real 0m45.694s 00:10:55.571 user 1m30.231s 00:10:55.571 sys 0m0.468s 00:10:55.571 13:34:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.571 13:34:27 -- common/autotest_common.sh@10 -- # set +x 00:10:55.571 ************************************ 00:10:55.571 END TEST bdev_verify 00:10:55.571 ************************************ 00:10:55.571 13:34:27 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:55.571 13:34:27 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:10:55.571 13:34:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:55.571 13:34:27 -- common/autotest_common.sh@10 -- # set +x 00:10:55.571 ************************************ 00:10:55.571 START TEST bdev_verify_big_io 00:10:55.571 ************************************ 00:10:55.571 13:34:27 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:55.571 [2024-07-10 13:34:27.947639] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:55.571 [2024-07-10 13:34:27.947995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:55.571 EAL: TSC is not safe to use in SMP mode 00:10:55.571 EAL: TSC is not invariant 00:10:55.571 [2024-07-10 13:34:28.374917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:55.571 [2024-07-10 13:34:28.462690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.571 [2024-07-10 13:34:28.462684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.571 [2024-07-10 13:34:28.518155] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:10:55.571 Running I/O for 5 seconds... 
00:10:55.571 00:10:55.571 Latency(us) 00:10:55.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.571 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:55.571 Verification LBA range: start 0x0 length 0xa000 00:10:55.571 Nvme0n1 : 5.01 14429.65 901.85 0.00 0.00 8820.68 617.63 30388.73 00:10:55.571 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:55.571 Verification LBA range: start 0xa000 length 0xa000 00:10:55.571 Nvme0n1 : 5.01 14452.36 903.27 0.00 0.00 8806.80 389.14 32445.11 00:10:55.571 =================================================================================================================== 00:10:55.571 Total : 28882.01 1805.13 0.00 0.00 8813.73 389.14 32445.11 00:10:59.769 00:10:59.769 real 0m10.958s 00:10:59.769 user 0m20.823s 00:10:59.769 sys 0m0.457s 00:10:59.769 13:34:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.769 13:34:38 -- common/autotest_common.sh@10 -- # set +x 00:10:59.769 ************************************ 00:10:59.769 END TEST bdev_verify_big_io 00:10:59.769 ************************************ 00:10:59.769 13:34:38 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:59.769 13:34:38 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:59.769 13:34:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:59.769 13:34:38 -- common/autotest_common.sh@10 -- # set +x 00:10:59.769 ************************************ 00:10:59.769 START TEST bdev_write_zeroes 00:10:59.769 ************************************ 00:10:59.769 13:34:38 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:59.769 [2024-07-10 13:34:38.958594] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:59.769 [2024-07-10 13:34:38.958966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:00.340 EAL: TSC is not safe to use in SMP mode 00:11:00.340 EAL: TSC is not invariant 00:11:00.340 [2024-07-10 13:34:39.489498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.340 [2024-07-10 13:34:39.576241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.340 [2024-07-10 13:34:39.631731] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:00.340 Running I/O for 1 seconds... 
00:11:01.716 00:11:01.716 Latency(us) 00:11:01.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.716 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:01.716 Nvme0n1 : 1.00 59676.35 233.11 0.00 0.00 2142.11 524.81 31074.19 00:11:01.716 =================================================================================================================== 00:11:01.716 Total : 59676.35 233.11 0.00 0.00 2142.11 524.81 31074.19 00:11:01.716 00:11:01.716 real 0m1.910s 00:11:01.716 user 0m1.349s 00:11:01.716 sys 0m0.559s 00:11:01.716 13:34:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.716 13:34:40 -- common/autotest_common.sh@10 -- # set +x 00:11:01.716 ************************************ 00:11:01.716 END TEST bdev_write_zeroes 00:11:01.716 ************************************ 00:11:01.716 13:34:40 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:01.716 13:34:40 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:01.716 13:34:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:01.716 13:34:40 -- common/autotest_common.sh@10 -- # set +x 00:11:01.716 ************************************ 00:11:01.716 START TEST bdev_json_nonenclosed 00:11:01.716 ************************************ 00:11:01.716 13:34:40 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:01.716 [2024-07-10 13:34:40.922174] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:01.716 [2024-07-10 13:34:40.922542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:02.055 EAL: TSC is not safe to use in SMP mode 00:11:02.055 EAL: TSC is not invariant 00:11:02.055 [2024-07-10 13:34:41.357490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.313 [2024-07-10 13:34:41.450823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.313 [2024-07-10 13:34:41.450959] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:11:02.313 [2024-07-10 13:34:41.450970] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:02.313 00:11:02.313 real 0m0.637s 00:11:02.313 user 0m0.162s 00:11:02.313 sys 0m0.472s 00:11:02.313 13:34:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.313 13:34:41 -- common/autotest_common.sh@10 -- # set +x 00:11:02.313 ************************************ 00:11:02.313 END TEST bdev_json_nonenclosed 00:11:02.313 ************************************ 00:11:02.313 13:34:41 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:02.313 13:34:41 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:02.313 13:34:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:02.313 13:34:41 -- common/autotest_common.sh@10 -- # set +x 00:11:02.313 ************************************ 00:11:02.313 START TEST bdev_json_nonarray 00:11:02.313 ************************************ 00:11:02.313 13:34:41 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:02.313 [2024-07-10 13:34:41.611392] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:02.313 [2024-07-10 13:34:41.611737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:02.879 EAL: TSC is not safe to use in SMP mode 00:11:02.879 EAL: TSC is not invariant 00:11:02.879 [2024-07-10 13:34:42.062142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.879 [2024-07-10 13:34:42.151381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.879 [2024-07-10 13:34:42.151492] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:11:02.879 [2024-07-10 13:34:42.151502] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:02.879 00:11:02.879 real 0m0.650s 00:11:02.879 user 0m0.143s 00:11:02.879 sys 0m0.504s 00:11:02.879 13:34:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.879 13:34:42 -- common/autotest_common.sh@10 -- # set +x 00:11:02.879 ************************************ 00:11:02.879 END TEST bdev_json_nonarray 00:11:02.879 ************************************ 00:11:03.138 13:34:42 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:11:03.138 13:34:42 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:11:03.138 13:34:42 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:11:03.138 13:34:42 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:11:03.138 13:34:42 -- bdev/blockdev.sh@809 -- # cleanup 00:11:03.138 13:34:42 -- bdev/blockdev.sh@21 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:03.138 13:34:42 -- bdev/blockdev.sh@22 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:03.138 13:34:42 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:11:03.138 13:34:42 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:11:03.138 13:34:42 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:11:03.138 13:34:42 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:11:03.138 00:11:03.138 real 1m3.867s 00:11:03.138 user 1m56.902s 00:11:03.138 sys 0m4.485s 00:11:03.138 13:34:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.138 13:34:42 -- common/autotest_common.sh@10 -- # set +x 00:11:03.138 ************************************ 00:11:03.138 END TEST blockdev_nvme 00:11:03.138 ************************************ 00:11:03.138 13:34:42 -- spdk/autotest.sh@219 -- # uname -s 00:11:03.138 13:34:42 -- spdk/autotest.sh@219 -- # [[ FreeBSD == Linux ]] 00:11:03.138 13:34:42 -- spdk/autotest.sh@222 -- # run_test nvme /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:03.138 13:34:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:03.138 13:34:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:03.138 13:34:42 -- common/autotest_common.sh@10 -- # set +x 00:11:03.138 ************************************ 00:11:03.138 START TEST nvme 00:11:03.138 ************************************ 00:11:03.138 13:34:42 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:03.398 * Looking for test storage... 
00:11:03.398 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:11:03.398 13:34:42 -- nvme/nvme.sh@77 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:03.398 hw.nic_uio.bdfs="0:6:0" 00:11:03.398 13:34:42 -- nvme/nvme.sh@79 -- # uname 00:11:03.398 13:34:42 -- nvme/nvme.sh@79 -- # '[' FreeBSD = Linux ']' 00:11:03.398 13:34:42 -- nvme/nvme.sh@84 -- # run_test nvme_reset /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:03.398 13:34:42 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:11:03.398 13:34:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:03.398 13:34:42 -- common/autotest_common.sh@10 -- # set +x 00:11:03.398 ************************************ 00:11:03.398 START TEST nvme_reset 00:11:03.398 ************************************ 00:11:03.398 13:34:42 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:03.964 EAL: TSC is not safe to use in SMP mode 00:11:03.964 EAL: TSC is not invariant 00:11:03.964 [2024-07-10 13:34:43.210492] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:03.964 Initializing NVMe Controllers 00:11:03.964 Skipping QEMU NVMe SSD at 0000:00:06.0 00:11:03.965 No NVMe controller found, /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:03.965 00:11:03.965 real 0m0.514s 00:11:03.965 user 0m0.017s 00:11:03.965 sys 0m0.497s 00:11:03.965 13:34:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.965 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:11:03.965 ************************************ 00:11:03.965 END TEST nvme_reset 00:11:03.965 ************************************ 00:11:03.965 13:34:43 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:03.965 13:34:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:03.965 13:34:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:03.965 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:11:03.965 ************************************ 00:11:03.965 START TEST nvme_identify 00:11:03.965 ************************************ 00:11:03.965 13:34:43 -- common/autotest_common.sh@1104 -- # nvme_identify 00:11:03.965 13:34:43 -- nvme/nvme.sh@12 -- # bdfs=() 00:11:03.965 13:34:43 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:03.965 13:34:43 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:03.965 13:34:43 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:03.965 13:34:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:03.965 13:34:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:03.965 13:34:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:03.965 13:34:43 -- common/autotest_common.sh@1499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:03.965 13:34:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:04.223 13:34:43 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:11:04.223 13:34:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:11:04.224 13:34:43 -- nvme/nvme.sh@14 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:04.484 EAL: TSC is not safe to use in SMP mode 00:11:04.484 EAL: TSC is not invariant 00:11:04.484 [2024-07-10 13:34:43.834399] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:04.484 
===================================================== 00:11:04.484 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:04.484 ===================================================== 00:11:04.484 Controller Capabilities/Features 00:11:04.484 ================================ 00:11:04.484 Vendor ID: 1b36 00:11:04.484 Subsystem Vendor ID: 1af4 00:11:04.484 Serial Number: 12340 00:11:04.484 Model Number: QEMU NVMe Ctrl 00:11:04.484 Firmware Version: 8.0.0 00:11:04.484 Recommended Arb Burst: 6 00:11:04.484 IEEE OUI Identifier: 00 54 52 00:11:04.484 Multi-path I/O 00:11:04.484 May have multiple subsystem ports: No 00:11:04.484 May have multiple controllers: No 00:11:04.484 Associated with SR-IOV VF: No 00:11:04.484 Max Data Transfer Size: 524288 00:11:04.484 Max Number of Namespaces: 256 00:11:04.484 Max Number of I/O Queues: 64 00:11:04.484 NVMe Specification Version (VS): 1.4 00:11:04.484 NVMe Specification Version (Identify): 1.4 00:11:04.484 Maximum Queue Entries: 2048 00:11:04.484 Contiguous Queues Required: Yes 00:11:04.484 Arbitration Mechanisms Supported 00:11:04.484 Weighted Round Robin: Not Supported 00:11:04.484 Vendor Specific: Not Supported 00:11:04.484 Reset Timeout: 7500 ms 00:11:04.484 Doorbell Stride: 4 bytes 00:11:04.484 NVM Subsystem Reset: Not Supported 00:11:04.484 Command Sets Supported 00:11:04.484 NVM Command Set: Supported 00:11:04.484 Boot Partition: Not Supported 00:11:04.484 Memory Page Size Minimum: 4096 bytes 00:11:04.484 Memory Page Size Maximum: 65536 bytes 00:11:04.484 Persistent Memory Region: Not Supported 00:11:04.484 Optional Asynchronous Events Supported 00:11:04.484 Namespace Attribute Notices: Supported 00:11:04.484 Firmware Activation Notices: Not Supported 00:11:04.484 ANA Change Notices: Not Supported 00:11:04.484 PLE Aggregate Log Change Notices: Not Supported 00:11:04.484 LBA Status Info Alert Notices: Not Supported 00:11:04.484 EGE Aggregate Log Change Notices: Not Supported 00:11:04.484 Normal NVM Subsystem Shutdown event: Not Supported 00:11:04.484 Zone Descriptor Change Notices: Not Supported 00:11:04.484 Discovery Log Change Notices: Not Supported 00:11:04.484 Controller Attributes 00:11:04.484 128-bit Host Identifier: Not Supported 00:11:04.484 Non-Operational Permissive Mode: Not Supported 00:11:04.484 NVM Sets: Not Supported 00:11:04.484 Read Recovery Levels: Not Supported 00:11:04.484 Endurance Groups: Not Supported 00:11:04.484 Predictable Latency Mode: Not Supported 00:11:04.484 Traffic Based Keep ALive: Not Supported 00:11:04.484 Namespace Granularity: Not Supported 00:11:04.484 SQ Associations: Not Supported 00:11:04.484 UUID List: Not Supported 00:11:04.484 Multi-Domain Subsystem: Not Supported 00:11:04.484 Fixed Capacity Management: Not Supported 00:11:04.484 Variable Capacity Management: Not Supported 00:11:04.484 Delete Endurance Group: Not Supported 00:11:04.484 Delete NVM Set: Not Supported 00:11:04.484 Extended LBA Formats Supported: Supported 00:11:04.484 Flexible Data Placement Supported: Not Supported 00:11:04.484 00:11:04.484 Controller Memory Buffer Support 00:11:04.484 ================================ 00:11:04.484 Supported: No 00:11:04.484 00:11:04.484 Persistent Memory Region Support 00:11:04.484 ================================ 00:11:04.484 Supported: No 00:11:04.484 00:11:04.484 Admin Command Set Attributes 00:11:04.484 ============================ 00:11:04.484 Security Send/Receive: Not Supported 00:11:04.484 Format NVM: Supported 00:11:04.484 Firmware Activate/Download: Not Supported 00:11:04.484 Namespace Management: 
Supported 00:11:04.484 Device Self-Test: Not Supported 00:11:04.484 Directives: Supported 00:11:04.484 NVMe-MI: Not Supported 00:11:04.484 Virtualization Management: Not Supported 00:11:04.484 Doorbell Buffer Config: Supported 00:11:04.484 Get LBA Status Capability: Not Supported 00:11:04.484 Command & Feature Lockdown Capability: Not Supported 00:11:04.484 Abort Command Limit: 4 00:11:04.484 Async Event Request Limit: 4 00:11:04.484 Number of Firmware Slots: N/A 00:11:04.484 Firmware Slot 1 Read-Only: N/A 00:11:04.484 Firmware Activation Without Reset: N/A 00:11:04.484 Multiple Update Detection Support: N/A 00:11:04.484 Firmware Update Granularity: No Information Provided 00:11:04.484 Per-Namespace SMART Log: Yes 00:11:04.484 Asymmetric Namespace Access Log Page: Not Supported 00:11:04.484 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:04.484 Command Effects Log Page: Supported 00:11:04.484 Get Log Page Extended Data: Supported 00:11:04.484 Telemetry Log Pages: Not Supported 00:11:04.484 Persistent Event Log Pages: Not Supported 00:11:04.484 Supported Log Pages Log Page: May Support 00:11:04.484 Commands Supported & Effects Log Page: Not Supported 00:11:04.484 Feature Identifiers & Effects Log Page:May Support 00:11:04.484 NVMe-MI Commands & Effects Log Page: May Support 00:11:04.484 Data Area 4 for Telemetry Log: Not Supported 00:11:04.484 Error Log Page Entries Supported: 1 00:11:04.484 Keep Alive: Not Supported 00:11:04.484 00:11:04.484 NVM Command Set Attributes 00:11:04.484 ========================== 00:11:04.484 Submission Queue Entry Size 00:11:04.484 Max: 64 00:11:04.484 Min: 64 00:11:04.484 Completion Queue Entry Size 00:11:04.484 Max: 16 00:11:04.484 Min: 16 00:11:04.484 Number of Namespaces: 256 00:11:04.484 Compare Command: Supported 00:11:04.484 Write Uncorrectable Command: Not Supported 00:11:04.484 Dataset Management Command: Supported 00:11:04.484 Write Zeroes Command: Supported 00:11:04.484 Set Features Save Field: Supported 00:11:04.484 Reservations: Not Supported 00:11:04.484 Timestamp: Supported 00:11:04.484 Copy: Supported 00:11:04.484 Volatile Write Cache: Present 00:11:04.484 Atomic Write Unit (Normal): 1 00:11:04.484 Atomic Write Unit (PFail): 1 00:11:04.484 Atomic Compare & Write Unit: 1 00:11:04.484 Fused Compare & Write: Not Supported 00:11:04.484 Scatter-Gather List 00:11:04.484 SGL Command Set: Supported 00:11:04.484 SGL Keyed: Not Supported 00:11:04.484 SGL Bit Bucket Descriptor: Not Supported 00:11:04.484 SGL Metadata Pointer: Not Supported 00:11:04.484 Oversized SGL: Not Supported 00:11:04.484 SGL Metadata Address: Not Supported 00:11:04.484 SGL Offset: Not Supported 00:11:04.484 Transport SGL Data Block: Not Supported 00:11:04.484 Replay Protected Memory Block: Not Supported 00:11:04.484 00:11:04.484 Firmware Slot Information 00:11:04.484 ========================= 00:11:04.484 Active slot: 1 00:11:04.484 Slot 1 Firmware Revision: 1.0 00:11:04.484 00:11:04.484 00:11:04.484 Commands Supported and Effects 00:11:04.484 ============================== 00:11:04.484 Admin Commands 00:11:04.484 -------------- 00:11:04.484 Delete I/O Submission Queue (00h): Supported 00:11:04.484 Create I/O Submission Queue (01h): Supported 00:11:04.484 Get Log Page (02h): Supported 00:11:04.484 Delete I/O Completion Queue (04h): Supported 00:11:04.484 Create I/O Completion Queue (05h): Supported 00:11:04.484 Identify (06h): Supported 00:11:04.484 Abort (08h): Supported 00:11:04.484 Set Features (09h): Supported 00:11:04.484 Get Features (0Ah): Supported 00:11:04.484 Asynchronous 
Event Request (0Ch): Supported 00:11:04.484 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:04.484 Directive Send (19h): Supported 00:11:04.484 Directive Receive (1Ah): Supported 00:11:04.484 Virtualization Management (1Ch): Supported 00:11:04.484 Doorbell Buffer Config (7Ch): Supported 00:11:04.484 Format NVM (80h): Supported LBA-Change 00:11:04.484 I/O Commands 00:11:04.484 ------------ 00:11:04.484 Flush (00h): Supported LBA-Change 00:11:04.484 Write (01h): Supported LBA-Change 00:11:04.484 Read (02h): Supported 00:11:04.484 Compare (05h): Supported 00:11:04.484 Write Zeroes (08h): Supported LBA-Change 00:11:04.484 Dataset Management (09h): Supported LBA-Change 00:11:04.484 Unknown (0Ch): Supported 00:11:04.484 Unknown (12h): Supported 00:11:04.484 Copy (19h): Supported LBA-Change 00:11:04.484 Unknown (1Dh): Supported LBA-Change 00:11:04.484 00:11:04.484 Error Log 00:11:04.484 ========= 00:11:04.484 00:11:04.484 Arbitration 00:11:04.484 =========== 00:11:04.484 Arbitration Burst: no limit 00:11:04.484 00:11:04.484 Power Management 00:11:04.484 ================ 00:11:04.484 Number of Power States: 1 00:11:04.484 Current Power State: Power State #0 00:11:04.484 Power State #0: 00:11:04.484 Max Power: 25.00 W 00:11:04.484 Non-Operational State: Operational 00:11:04.484 Entry Latency: 16 microseconds 00:11:04.484 Exit Latency: 4 microseconds 00:11:04.485 Relative Read Throughput: 0 00:11:04.485 Relative Read Latency: 0 00:11:04.485 Relative Write Throughput: 0 00:11:04.485 Relative Write Latency: 0 00:11:04.744 Idle Power: Not Reported 00:11:04.744 Active Power: Not Reported 00:11:04.744 Non-Operational Permissive Mode: Not Supported 00:11:04.744 00:11:04.744 Health Information 00:11:04.744 ================== 00:11:04.744 Critical Warnings: 00:11:04.744 Available Spare Space: OK 00:11:04.744 Temperature: OK 00:11:04.744 Device Reliability: OK 00:11:04.744 Read Only: No 00:11:04.744 Volatile Memory Backup: OK 00:11:04.744 Current Temperature: 323 Kelvin (50 Celsius) 00:11:04.744 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:04.744 Available Spare: 0% 00:11:04.744 Available Spare Threshold: 0% 00:11:04.744 Life Percentage Used: 0% 00:11:04.744 Data Units Read: 21447 00:11:04.744 Data Units Written: 10788 00:11:04.744 Host Read Commands: 509201 00:11:04.744 Host Write Commands: 255472 00:11:04.744 Controller Busy Time: 0 minutes 00:11:04.744 Power Cycles: 0 00:11:04.744 Power On Hours: 0 hours 00:11:04.744 Unsafe Shutdowns: 0 00:11:04.744 Unrecoverable Media Errors: 0 00:11:04.744 Lifetime Error Log Entries: 0 00:11:04.744 Warning Temperature Time: 0 minutes 00:11:04.744 Critical Temperature Time: 0 minutes 00:11:04.744 00:11:04.744 Number of Queues 00:11:04.744 ================ 00:11:04.744 Number of I/O Submission Queues: 64 00:11:04.744 Number of I/O Completion Queues: 64 00:11:04.744 00:11:04.744 ZNS Specific Controller Data 00:11:04.744 ============================ 00:11:04.744 Zone Append Size Limit: 0 00:11:04.744 00:11:04.744 00:11:04.744 Active Namespaces 00:11:04.744 ================= 00:11:04.744 Namespace ID:1 00:11:04.744 Error Recovery Timeout: Unlimited 00:11:04.744 Command Set Identifier: NVM (00h) 00:11:04.744 Deallocate: Supported 00:11:04.744 Deallocated/Unwritten Error: Supported 00:11:04.744 Deallocated Read Value: All 0x00 00:11:04.744 Deallocate in Write Zeroes: Not Supported 00:11:04.744 Deallocated Guard Field: 0xFFFF 00:11:04.744 Flush: Supported 00:11:04.744 Reservation: Not Supported 00:11:04.744 Namespace Sharing Capabilities: Private 
00:11:04.744 Size (in LBAs): 1310720 (5GiB) 00:11:04.744 Capacity (in LBAs): 1310720 (5GiB) 00:11:04.744 Utilization (in LBAs): 1310720 (5GiB) 00:11:04.744 Thin Provisioning: Not Supported 00:11:04.744 Per-NS Atomic Units: No 00:11:04.744 Maximum Single Source Range Length: 128 00:11:04.744 Maximum Copy Length: 128 00:11:04.744 Maximum Source Range Count: 128 00:11:04.744 NGUID/EUI64 Never Reused: No 00:11:04.744 Namespace Write Protected: No 00:11:04.744 Number of LBA Formats: 8 00:11:04.744 Current LBA Format: LBA Format #04 00:11:04.744 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:04.744 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:04.744 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:04.744 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:04.744 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:04.744 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:04.744 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:04.744 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:04.744 00:11:04.744 13:34:43 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:04.744 13:34:43 -- nvme/nvme.sh@16 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:11:05.003 EAL: TSC is not safe to use in SMP mode 00:11:05.003 EAL: TSC is not invariant 00:11:05.003 [2024-07-10 13:34:44.350885] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:05.003 ===================================================== 00:11:05.003 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:05.003 ===================================================== 00:11:05.003 Controller Capabilities/Features 00:11:05.003 ================================ 00:11:05.003 Vendor ID: 1b36 00:11:05.003 Subsystem Vendor ID: 1af4 00:11:05.003 Serial Number: 12340 00:11:05.003 Model Number: QEMU NVMe Ctrl 00:11:05.003 Firmware Version: 8.0.0 00:11:05.003 Recommended Arb Burst: 6 00:11:05.003 IEEE OUI Identifier: 00 54 52 00:11:05.003 Multi-path I/O 00:11:05.003 May have multiple subsystem ports: No 00:11:05.003 May have multiple controllers: No 00:11:05.003 Associated with SR-IOV VF: No 00:11:05.003 Max Data Transfer Size: 524288 00:11:05.003 Max Number of Namespaces: 256 00:11:05.003 Max Number of I/O Queues: 64 00:11:05.003 NVMe Specification Version (VS): 1.4 00:11:05.003 NVMe Specification Version (Identify): 1.4 00:11:05.003 Maximum Queue Entries: 2048 00:11:05.003 Contiguous Queues Required: Yes 00:11:05.003 Arbitration Mechanisms Supported 00:11:05.003 Weighted Round Robin: Not Supported 00:11:05.003 Vendor Specific: Not Supported 00:11:05.003 Reset Timeout: 7500 ms 00:11:05.003 Doorbell Stride: 4 bytes 00:11:05.003 NVM Subsystem Reset: Not Supported 00:11:05.003 Command Sets Supported 00:11:05.003 NVM Command Set: Supported 00:11:05.003 Boot Partition: Not Supported 00:11:05.003 Memory Page Size Minimum: 4096 bytes 00:11:05.003 Memory Page Size Maximum: 65536 bytes 00:11:05.003 Persistent Memory Region: Not Supported 00:11:05.003 Optional Asynchronous Events Supported 00:11:05.003 Namespace Attribute Notices: Supported 00:11:05.003 Firmware Activation Notices: Not Supported 00:11:05.003 ANA Change Notices: Not Supported 00:11:05.003 PLE Aggregate Log Change Notices: Not Supported 00:11:05.003 LBA Status Info Alert Notices: Not Supported 00:11:05.003 EGE Aggregate Log Change Notices: Not Supported 00:11:05.003 Normal NVM Subsystem Shutdown event: Not Supported 00:11:05.003 Zone Descriptor Change Notices: 
Not Supported 00:11:05.003 Discovery Log Change Notices: Not Supported 00:11:05.003 Controller Attributes 00:11:05.003 128-bit Host Identifier: Not Supported 00:11:05.003 Non-Operational Permissive Mode: Not Supported 00:11:05.004 NVM Sets: Not Supported 00:11:05.004 Read Recovery Levels: Not Supported 00:11:05.004 Endurance Groups: Not Supported 00:11:05.004 Predictable Latency Mode: Not Supported 00:11:05.004 Traffic Based Keep ALive: Not Supported 00:11:05.004 Namespace Granularity: Not Supported 00:11:05.004 SQ Associations: Not Supported 00:11:05.004 UUID List: Not Supported 00:11:05.004 Multi-Domain Subsystem: Not Supported 00:11:05.004 Fixed Capacity Management: Not Supported 00:11:05.004 Variable Capacity Management: Not Supported 00:11:05.004 Delete Endurance Group: Not Supported 00:11:05.004 Delete NVM Set: Not Supported 00:11:05.004 Extended LBA Formats Supported: Supported 00:11:05.004 Flexible Data Placement Supported: Not Supported 00:11:05.004 00:11:05.004 Controller Memory Buffer Support 00:11:05.004 ================================ 00:11:05.004 Supported: No 00:11:05.004 00:11:05.004 Persistent Memory Region Support 00:11:05.004 ================================ 00:11:05.004 Supported: No 00:11:05.004 00:11:05.004 Admin Command Set Attributes 00:11:05.004 ============================ 00:11:05.004 Security Send/Receive: Not Supported 00:11:05.004 Format NVM: Supported 00:11:05.004 Firmware Activate/Download: Not Supported 00:11:05.004 Namespace Management: Supported 00:11:05.004 Device Self-Test: Not Supported 00:11:05.004 Directives: Supported 00:11:05.004 NVMe-MI: Not Supported 00:11:05.004 Virtualization Management: Not Supported 00:11:05.004 Doorbell Buffer Config: Supported 00:11:05.004 Get LBA Status Capability: Not Supported 00:11:05.004 Command & Feature Lockdown Capability: Not Supported 00:11:05.004 Abort Command Limit: 4 00:11:05.004 Async Event Request Limit: 4 00:11:05.004 Number of Firmware Slots: N/A 00:11:05.004 Firmware Slot 1 Read-Only: N/A 00:11:05.004 Firmware Activation Without Reset: N/A 00:11:05.004 Multiple Update Detection Support: N/A 00:11:05.004 Firmware Update Granularity: No Information Provided 00:11:05.004 Per-Namespace SMART Log: Yes 00:11:05.004 Asymmetric Namespace Access Log Page: Not Supported 00:11:05.004 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:05.004 Command Effects Log Page: Supported 00:11:05.004 Get Log Page Extended Data: Supported 00:11:05.004 Telemetry Log Pages: Not Supported 00:11:05.004 Persistent Event Log Pages: Not Supported 00:11:05.004 Supported Log Pages Log Page: May Support 00:11:05.004 Commands Supported & Effects Log Page: Not Supported 00:11:05.004 Feature Identifiers & Effects Log Page:May Support 00:11:05.004 NVMe-MI Commands & Effects Log Page: May Support 00:11:05.004 Data Area 4 for Telemetry Log: Not Supported 00:11:05.004 Error Log Page Entries Supported: 1 00:11:05.004 Keep Alive: Not Supported 00:11:05.004 00:11:05.004 NVM Command Set Attributes 00:11:05.004 ========================== 00:11:05.004 Submission Queue Entry Size 00:11:05.004 Max: 64 00:11:05.004 Min: 64 00:11:05.004 Completion Queue Entry Size 00:11:05.004 Max: 16 00:11:05.004 Min: 16 00:11:05.004 Number of Namespaces: 256 00:11:05.004 Compare Command: Supported 00:11:05.004 Write Uncorrectable Command: Not Supported 00:11:05.004 Dataset Management Command: Supported 00:11:05.004 Write Zeroes Command: Supported 00:11:05.004 Set Features Save Field: Supported 00:11:05.004 Reservations: Not Supported 00:11:05.004 Timestamp: Supported 
00:11:05.004 Copy: Supported 00:11:05.004 Volatile Write Cache: Present 00:11:05.004 Atomic Write Unit (Normal): 1 00:11:05.004 Atomic Write Unit (PFail): 1 00:11:05.004 Atomic Compare & Write Unit: 1 00:11:05.004 Fused Compare & Write: Not Supported 00:11:05.004 Scatter-Gather List 00:11:05.004 SGL Command Set: Supported 00:11:05.004 SGL Keyed: Not Supported 00:11:05.004 SGL Bit Bucket Descriptor: Not Supported 00:11:05.004 SGL Metadata Pointer: Not Supported 00:11:05.004 Oversized SGL: Not Supported 00:11:05.004 SGL Metadata Address: Not Supported 00:11:05.004 SGL Offset: Not Supported 00:11:05.004 Transport SGL Data Block: Not Supported 00:11:05.004 Replay Protected Memory Block: Not Supported 00:11:05.004 00:11:05.004 Firmware Slot Information 00:11:05.004 ========================= 00:11:05.004 Active slot: 1 00:11:05.004 Slot 1 Firmware Revision: 1.0 00:11:05.004 00:11:05.004 00:11:05.004 Commands Supported and Effects 00:11:05.004 ============================== 00:11:05.004 Admin Commands 00:11:05.004 -------------- 00:11:05.004 Delete I/O Submission Queue (00h): Supported 00:11:05.004 Create I/O Submission Queue (01h): Supported 00:11:05.004 Get Log Page (02h): Supported 00:11:05.004 Delete I/O Completion Queue (04h): Supported 00:11:05.004 Create I/O Completion Queue (05h): Supported 00:11:05.004 Identify (06h): Supported 00:11:05.004 Abort (08h): Supported 00:11:05.004 Set Features (09h): Supported 00:11:05.004 Get Features (0Ah): Supported 00:11:05.004 Asynchronous Event Request (0Ch): Supported 00:11:05.004 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:05.004 Directive Send (19h): Supported 00:11:05.004 Directive Receive (1Ah): Supported 00:11:05.004 Virtualization Management (1Ch): Supported 00:11:05.004 Doorbell Buffer Config (7Ch): Supported 00:11:05.004 Format NVM (80h): Supported LBA-Change 00:11:05.004 I/O Commands 00:11:05.004 ------------ 00:11:05.004 Flush (00h): Supported LBA-Change 00:11:05.004 Write (01h): Supported LBA-Change 00:11:05.004 Read (02h): Supported 00:11:05.004 Compare (05h): Supported 00:11:05.004 Write Zeroes (08h): Supported LBA-Change 00:11:05.004 Dataset Management (09h): Supported LBA-Change 00:11:05.004 Unknown (0Ch): Supported 00:11:05.004 Unknown (12h): Supported 00:11:05.004 Copy (19h): Supported LBA-Change 00:11:05.004 Unknown (1Dh): Supported LBA-Change 00:11:05.004 00:11:05.004 Error Log 00:11:05.004 ========= 00:11:05.004 00:11:05.004 Arbitration 00:11:05.004 =========== 00:11:05.004 Arbitration Burst: no limit 00:11:05.004 00:11:05.004 Power Management 00:11:05.004 ================ 00:11:05.004 Number of Power States: 1 00:11:05.004 Current Power State: Power State #0 00:11:05.004 Power State #0: 00:11:05.004 Max Power: 25.00 W 00:11:05.004 Non-Operational State: Operational 00:11:05.004 Entry Latency: 16 microseconds 00:11:05.004 Exit Latency: 4 microseconds 00:11:05.004 Relative Read Throughput: 0 00:11:05.004 Relative Read Latency: 0 00:11:05.004 Relative Write Throughput: 0 00:11:05.004 Relative Write Latency: 0 00:11:05.264 Idle Power: Not Reported 00:11:05.264 Active Power: Not Reported 00:11:05.264 Non-Operational Permissive Mode: Not Supported 00:11:05.264 00:11:05.264 Health Information 00:11:05.264 ================== 00:11:05.264 Critical Warnings: 00:11:05.264 Available Spare Space: OK 00:11:05.264 Temperature: OK 00:11:05.264 Device Reliability: OK 00:11:05.264 Read Only: No 00:11:05.264 Volatile Memory Backup: OK 00:11:05.264 Current Temperature: 323 Kelvin (50 Celsius) 00:11:05.264 Temperature Threshold: 343 
Kelvin (70 Celsius) 00:11:05.264 Available Spare: 0% 00:11:05.264 Available Spare Threshold: 0% 00:11:05.264 Life Percentage Used: 0% 00:11:05.264 Data Units Read: 21447 00:11:05.264 Data Units Written: 10788 00:11:05.264 Host Read Commands: 509201 00:11:05.264 Host Write Commands: 255472 00:11:05.264 Controller Busy Time: 0 minutes 00:11:05.264 Power Cycles: 0 00:11:05.264 Power On Hours: 0 hours 00:11:05.264 Unsafe Shutdowns: 0 00:11:05.264 Unrecoverable Media Errors: 0 00:11:05.264 Lifetime Error Log Entries: 0 00:11:05.264 Warning Temperature Time: 0 minutes 00:11:05.264 Critical Temperature Time: 0 minutes 00:11:05.264 00:11:05.264 Number of Queues 00:11:05.264 ================ 00:11:05.264 Number of I/O Submission Queues: 64 00:11:05.264 Number of I/O Completion Queues: 64 00:11:05.264 00:11:05.264 ZNS Specific Controller Data 00:11:05.264 ============================ 00:11:05.264 Zone Append Size Limit: 0 00:11:05.264 00:11:05.264 00:11:05.264 Active Namespaces 00:11:05.264 ================= 00:11:05.264 Namespace ID:1 00:11:05.264 Error Recovery Timeout: Unlimited 00:11:05.264 Command Set Identifier: NVM (00h) 00:11:05.264 Deallocate: Supported 00:11:05.264 Deallocated/Unwritten Error: Supported 00:11:05.264 Deallocated Read Value: All 0x00 00:11:05.264 Deallocate in Write Zeroes: Not Supported 00:11:05.264 Deallocated Guard Field: 0xFFFF 00:11:05.264 Flush: Supported 00:11:05.264 Reservation: Not Supported 00:11:05.264 Namespace Sharing Capabilities: Private 00:11:05.264 Size (in LBAs): 1310720 (5GiB) 00:11:05.264 Capacity (in LBAs): 1310720 (5GiB) 00:11:05.264 Utilization (in LBAs): 1310720 (5GiB) 00:11:05.264 Thin Provisioning: Not Supported 00:11:05.264 Per-NS Atomic Units: No 00:11:05.264 Maximum Single Source Range Length: 128 00:11:05.264 Maximum Copy Length: 128 00:11:05.264 Maximum Source Range Count: 128 00:11:05.264 NGUID/EUI64 Never Reused: No 00:11:05.264 Namespace Write Protected: No 00:11:05.264 Number of LBA Formats: 8 00:11:05.264 Current LBA Format: LBA Format #04 00:11:05.264 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:05.264 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:05.264 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:05.264 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:05.264 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:05.264 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:05.264 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:05.264 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:05.264 00:11:05.264 00:11:05.264 real 0m1.087s 00:11:05.264 user 0m0.091s 00:11:05.264 sys 0m1.022s 00:11:05.264 13:34:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.264 13:34:44 -- common/autotest_common.sh@10 -- # set +x 00:11:05.264 ************************************ 00:11:05.264 END TEST nvme_identify 00:11:05.264 ************************************ 00:11:05.264 13:34:44 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:05.264 13:34:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:05.264 13:34:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:05.264 13:34:44 -- common/autotest_common.sh@10 -- # set +x 00:11:05.264 ************************************ 00:11:05.264 START TEST nvme_perf 00:11:05.264 ************************************ 00:11:05.264 13:34:44 -- common/autotest_common.sh@1104 -- # nvme_perf 00:11:05.264 13:34:44 -- nvme/nvme.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 
00:11:05.832 EAL: TSC is not safe to use in SMP mode 00:11:05.832 EAL: TSC is not invariant 00:11:05.832 [2024-07-10 13:34:44.914936] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:06.769 Initializing NVMe Controllers 00:11:06.769 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:06.769 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:11:06.769 Initialization complete. Launching workers. 00:11:06.769 ======================================================== 00:11:06.769 Latency(us) 00:11:06.769 Device Information : IOPS MiB/s Average min max 00:11:06.769 PCIE (0000:00:06.0) NSID 1 from core 0: 102270.99 1198.49 1251.39 270.43 4026.11 00:11:06.769 ======================================================== 00:11:06.769 Total : 102270.99 1198.49 1251.39 270.43 4026.11 00:11:06.770 00:11:06.770 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:11:06.770 ================================================================================= 00:11:06.770 1.00000% : 1035.330us 00:11:06.770 10.00000% : 1099.592us 00:11:06.770 25.00000% : 1142.433us 00:11:06.770 50.00000% : 1206.695us 00:11:06.770 75.00000% : 1306.658us 00:11:06.770 90.00000% : 1442.322us 00:11:06.770 95.00000% : 1535.145us 00:11:06.770 98.00000% : 1792.192us 00:11:06.770 99.00000% : 2099.221us 00:11:06.770 99.50000% : 2427.671us 00:11:06.770 99.90000% : 3413.020us 00:11:06.770 99.99000% : 3941.395us 00:11:06.770 99.99900% : 4027.078us 00:11:06.770 99.99990% : 4027.078us 00:11:06.770 99.99999% : 4027.078us 00:11:06.770 00:11:06.770 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:11:06.770 ============================================================================== 00:11:06.770 Range in us Cumulative IO count 00:11:06.770 269.543 - 271.328: 0.0010% ( 1) 00:11:06.770 271.328 - 273.113: 0.0020% ( 1) 00:11:06.770 273.113 - 274.898: 0.0039% ( 2) 00:11:06.770 274.898 - 276.683: 0.0059% ( 2) 00:11:06.770 276.683 - 278.468: 0.0068% ( 1) 00:11:06.770 278.468 - 280.253: 0.0088% ( 2) 00:11:06.770 280.253 - 282.038: 0.0107% ( 2) 00:11:06.770 282.038 - 283.823: 0.0117% ( 1) 00:11:06.770 283.823 - 285.608: 0.0137% ( 2) 00:11:06.770 285.608 - 287.393: 0.0147% ( 1) 00:11:06.770 305.244 - 307.029: 0.0166% ( 2) 00:11:06.770 307.029 - 308.814: 0.0195% ( 3) 00:11:06.770 308.814 - 310.599: 0.0205% ( 1) 00:11:06.770 310.599 - 312.384: 0.0225% ( 2) 00:11:06.770 312.384 - 314.169: 0.0235% ( 1) 00:11:06.770 314.169 - 315.954: 0.0254% ( 2) 00:11:06.770 315.954 - 317.739: 0.0264% ( 1) 00:11:06.770 317.739 - 319.524: 0.0283% ( 2) 00:11:06.770 828.264 - 831.834: 0.0293% ( 1) 00:11:06.770 838.974 - 842.545: 0.0303% ( 1) 00:11:06.770 842.545 - 846.115: 0.0322% ( 2) 00:11:06.770 846.115 - 849.685: 0.0342% ( 2) 00:11:06.770 849.685 - 853.255: 0.0362% ( 2) 00:11:06.770 853.255 - 856.825: 0.0381% ( 2) 00:11:06.770 856.825 - 860.395: 0.0401% ( 2) 00:11:06.770 860.395 - 863.965: 0.0420% ( 2) 00:11:06.770 863.965 - 867.535: 0.0440% ( 2) 00:11:06.770 867.535 - 871.105: 0.0459% ( 2) 00:11:06.770 871.105 - 874.676: 0.0479% ( 2) 00:11:06.770 874.676 - 878.246: 0.0498% ( 2) 00:11:06.770 878.246 - 881.816: 0.0518% ( 2) 00:11:06.770 881.816 - 885.386: 0.0537% ( 2) 00:11:06.770 885.386 - 888.956: 0.0557% ( 2) 00:11:06.770 888.956 - 892.526: 0.0577% ( 2) 00:11:06.770 892.526 - 896.096: 0.0596% ( 2) 00:11:06.770 896.096 - 899.666: 0.0625% ( 3) 00:11:06.770 899.666 - 903.236: 0.0674% ( 5) 00:11:06.770 903.236 - 906.806: 0.0723% ( 5) 00:11:06.770 906.806 - 910.377: 0.0782% ( 6) 
00:11:06.770 910.377 - 913.947: 0.0821% ( 4) 00:11:06.770 913.947 - 921.087: 0.0880% ( 6) 00:11:06.770 921.087 - 928.227: 0.0948% ( 7) 00:11:06.770 928.227 - 935.367: 0.1026% ( 8) 00:11:06.770 935.367 - 942.508: 0.1094% ( 7) 00:11:06.770 942.508 - 949.648: 0.1192% ( 10) 00:11:06.770 949.648 - 956.788: 0.1280% ( 9) 00:11:06.770 956.788 - 963.928: 0.1407% ( 13) 00:11:06.770 963.928 - 971.068: 0.1612% ( 21) 00:11:06.770 971.068 - 978.209: 0.1867% ( 26) 00:11:06.770 978.209 - 985.349: 0.2121% ( 26) 00:11:06.770 985.349 - 992.489: 0.2482% ( 37) 00:11:06.770 992.489 - 999.629: 0.2951% ( 48) 00:11:06.770 999.629 - 1006.769: 0.3586% ( 65) 00:11:06.770 1006.769 - 1013.910: 0.4476% ( 91) 00:11:06.770 1013.910 - 1021.050: 0.5717% ( 127) 00:11:06.770 1021.050 - 1028.190: 0.7437% ( 176) 00:11:06.770 1028.190 - 1035.330: 1.0075% ( 270) 00:11:06.770 1035.330 - 1042.470: 1.4121% ( 414) 00:11:06.770 1042.470 - 1049.611: 1.9154% ( 515) 00:11:06.770 1049.611 - 1056.751: 2.6336% ( 735) 00:11:06.770 1056.751 - 1063.891: 3.4965% ( 883) 00:11:06.770 1063.891 - 1071.031: 4.5793% ( 1108) 00:11:06.770 1071.031 - 1078.171: 5.8380% ( 1288) 00:11:06.770 1078.171 - 1085.312: 7.2696% ( 1465) 00:11:06.770 1085.312 - 1092.452: 8.9260% ( 1695) 00:11:06.770 1092.452 - 1099.592: 10.8111% ( 1929) 00:11:06.770 1099.592 - 1106.732: 12.9053% ( 2143) 00:11:06.770 1106.732 - 1113.873: 15.1881% ( 2336) 00:11:06.770 1113.873 - 1121.013: 17.6586% ( 2528) 00:11:06.770 1121.013 - 1128.153: 20.2326% ( 2634) 00:11:06.770 1128.153 - 1135.293: 22.8916% ( 2721) 00:11:06.770 1135.293 - 1142.433: 25.5966% ( 2768) 00:11:06.770 1142.433 - 1149.574: 28.3973% ( 2866) 00:11:06.770 1149.574 - 1156.714: 31.2411% ( 2910) 00:11:06.770 1156.714 - 1163.854: 34.1083% ( 2934) 00:11:06.770 1163.854 - 1170.994: 36.9520% ( 2910) 00:11:06.770 1170.994 - 1178.134: 39.7635% ( 2877) 00:11:06.770 1178.134 - 1185.275: 42.5447% ( 2846) 00:11:06.770 1185.275 - 1192.415: 45.2028% ( 2720) 00:11:06.770 1192.415 - 1199.555: 47.7807% ( 2638) 00:11:06.770 1199.555 - 1206.695: 50.2521% ( 2529) 00:11:06.770 1206.695 - 1213.835: 52.6288% ( 2432) 00:11:06.770 1213.835 - 1220.976: 54.9116% ( 2336) 00:11:06.770 1220.976 - 1228.116: 57.1103% ( 2250) 00:11:06.770 1228.116 - 1235.256: 59.2124% ( 2151) 00:11:06.770 1235.256 - 1242.396: 61.2127% ( 2047) 00:11:06.770 1242.396 - 1249.536: 63.1095% ( 1941) 00:11:06.770 1249.536 - 1256.677: 64.9204% ( 1853) 00:11:06.770 1256.677 - 1263.817: 66.6569% ( 1777) 00:11:06.770 1263.817 - 1270.957: 68.3416% ( 1724) 00:11:06.770 1270.957 - 1278.097: 69.9521% ( 1648) 00:11:06.770 1278.097 - 1285.238: 71.4893% ( 1573) 00:11:06.770 1285.238 - 1292.378: 72.9796% ( 1525) 00:11:06.770 1292.378 - 1299.518: 74.3985% ( 1452) 00:11:06.770 1299.518 - 1306.658: 75.7363% ( 1369) 00:11:06.770 1306.658 - 1313.798: 77.0058% ( 1299) 00:11:06.770 1313.798 - 1320.939: 78.1667% ( 1188) 00:11:06.770 1320.939 - 1328.079: 79.2612% ( 1120) 00:11:06.770 1328.079 - 1335.219: 80.3029% ( 1066) 00:11:06.770 1335.219 - 1342.359: 81.2909% ( 1011) 00:11:06.770 1342.359 - 1349.499: 82.1919% ( 922) 00:11:06.770 1349.499 - 1356.640: 83.0157% ( 843) 00:11:06.770 1356.640 - 1363.780: 83.7887% ( 791) 00:11:06.770 1363.780 - 1370.920: 84.4728% ( 700) 00:11:06.770 1370.920 - 1378.060: 85.1344% ( 677) 00:11:06.770 1378.060 - 1385.200: 85.7793% ( 660) 00:11:06.770 1385.200 - 1392.341: 86.3940% ( 629) 00:11:06.770 1392.341 - 1399.481: 86.9735% ( 593) 00:11:06.770 1399.481 - 1406.621: 87.5491% ( 589) 00:11:06.770 1406.621 - 1413.761: 88.0973% ( 561) 00:11:06.770 1413.761 - 1420.901: 
88.6289% ( 544)
00:11:06.770 [cumulative latency histogram continues: 1420.901us - 4027.078us, reaching 100.0000% of IO at 4027.078us]
00:11:06.772
00:11:06.772 13:34:45 -- nvme/nvme.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:11:07.338 EAL: TSC is not safe to use in SMP mode
00:11:07.338 EAL: TSC is not invariant
00:11:07.338 [2024-07-10 13:34:46.412978] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:11:08.273 Initializing NVMe Controllers
00:11:08.274 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:11:08.274 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:11:08.274 Initialization complete. Launching workers.
00:11:08.274 ========================================================
00:11:08.274                                            Latency(us)
00:11:08.274 Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:08.274 PCIE (0000:00:06.0) NSID 1 from core  0:   71233.62     834.77    1800.79     418.81   12836.78
00:11:08.274 ========================================================
00:11:08.274 Total                                  :   71233.62     834.77    1800.79     418.81   12836.78
00:11:08.274
00:11:08.274 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0:
00:11:08.274 =================================================================================
00:11:08.274     1.00000% :   881.816us
00:11:08.274    10.00000% :  1092.452us
00:11:08.274    25.00000% :  1270.957us
00:11:08.274    50.00000% :  1570.846us
00:11:08.274    75.00000% :  2213.465us
00:11:08.274    90.00000% :  2813.242us
00:11:08.274    95.00000% :  3141.692us
00:11:08.274    98.00000% :  3655.787us
00:11:08.274    99.00000% :  4284.125us
00:11:08.274    99.50000% :  5312.315us
00:11:08.274    99.90000% :  7482.938us
00:11:08.274    99.99000% : 10053.413us
00:11:08.274    99.99900% : 12852.375us
00:11:08.274    99.99990% : 12852.375us
00:11:08.274    99.99999% : 12852.375us
00:11:08.274
00:11:08.274 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0:
00:11:08.274 ==============================================================================
00:11:08.274        Range in us     Cumulative    IO count
00:11:08.274 [cumulative latency histogram: 417.702us - 12852.375us, reaching 100.0000% of IO at 12852.375us]
00:11:09.210
00:11:09.210 13:34:48 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:11:09.210
00:11:09.210 real 0m3.882s
00:11:09.210 user 0m2.890s
00:11:09.210 sys 0m0.989s
00:11:09.210 13:34:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:09.210 13:34:48 -- common/autotest_common.sh@10 -- # set +x
00:11:09.210 ************************************
00:11:09.210 END TEST nvme_perf
00:11:09.210 ************************************
00:11:09.210 13:34:48 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:11:09.210 13:34:48 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:11:09.210 13:34:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:09.210 13:34:48 -- common/autotest_common.sh@10 -- # set +x
00:11:09.210 ************************************
00:11:09.210 START TEST nvme_hello_world
00:11:09.210 ************************************
00:11:09.210 13:34:48 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:11:09.468 EAL: TSC is not safe to use in SMP mode
00:11:09.468 EAL: TSC is not invariant
00:11:09.468 [2024-07-10 13:34:48.809476] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:11:09.726 Initializing NVMe Controllers
00:11:09.726 Attaching to 0000:00:06.0
00:11:09.726 Attached to 0000:00:06.0
00:11:09.726 Namespace ID: 1 size: 5GB
00:11:09.726 Initialization complete.
00:11:09.726 INFO: using host memory buffer for IO
00:11:09.726 Hello world!
00:11:09.726
00:11:09.726 real 0m0.490s
00:11:09.726 user 0m0.025s
00:11:09.726 sys 0m0.465s
00:11:09.726 13:34:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:09.726 13:34:48 -- common/autotest_common.sh@10 -- # set +x
00:11:09.726 ************************************
00:11:09.726 END TEST nvme_hello_world
00:11:09.726 ************************************
00:11:09.726 13:34:48 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /usr/home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:11:09.726 13:34:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:11:09.726 13:34:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:09.726 13:34:48 -- common/autotest_common.sh@10 -- # set +x
00:11:09.726 ************************************
00:11:09.726 START TEST nvme_sgl
00:11:09.726 ************************************
00:11:09.726 13:34:48 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:11:09.984 EAL: TSC is not safe to use in SMP mode
00:11:09.984 EAL: TSC is not invariant
00:11:09.984 [2024-07-10 13:34:49.337982] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:11:09.984 0000:00:06.0: build_io_request_0 Invalid IO length parameter
00:11:09.984 0000:00:06.0: build_io_request_1 Invalid IO length parameter
00:11:09.984 0000:00:06.0: build_io_request_3 Invalid IO length parameter
00:11:09.984 0000:00:06.0: build_io_request_8 Invalid IO length parameter
00:11:09.984 0000:00:06.0: build_io_request_9 Invalid IO length parameter
00:11:09.984 0000:00:06.0: build_io_request_11 Invalid IO length parameter
00:11:10.242 NVMe Readv/Writev Request test
00:11:10.242 Attaching to 0000:00:06.0
00:11:10.242 Attached to 0000:00:06.0
00:11:10.242 0000:00:06.0: build_io_request_2 test passed
00:11:10.242 0000:00:06.0: build_io_request_4 test passed
00:11:10.242 0000:00:06.0: build_io_request_5 test passed
00:11:10.242 0000:00:06.0: build_io_request_6 test passed
00:11:10.242 0000:00:06.0: build_io_request_7 test passed
00:11:10.242 0000:00:06.0: build_io_request_10 test passed
00:11:10.242 Cleaning up...
00:11:10.242
00:11:10.242 real 0m0.486s
00:11:10.242 user 0m0.008s
00:11:10.242 sys 0m0.478s
00:11:10.242 13:34:49 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:10.242 13:34:49 -- common/autotest_common.sh@10 -- # set +x
00:11:10.242 ************************************
00:11:10.242 END TEST nvme_sgl
00:11:10.242 ************************************
00:11:10.242 13:34:49 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /usr/home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:11:10.242 13:34:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:11:10.242 13:34:49 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:10.242 13:34:49 -- common/autotest_common.sh@10 -- # set +x
00:11:10.242 ************************************
00:11:10.242 START TEST nvme_e2edp
00:11:10.242 ************************************
00:11:10.242 13:34:49 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:11:10.808 EAL: TSC is not safe to use in SMP mode
00:11:10.808 EAL: TSC is not invariant
00:11:10.808 [2024-07-10 13:34:49.935261] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:11:10.808 NVMe Write/Read with End-to-End data protection test
00:11:10.808 Attaching to 0000:00:06.0
00:11:10.808 Attached to 0000:00:06.0
00:11:10.808 Cleaning up...
00:11:10.808
00:11:10.808 real 0m0.545s
00:11:10.808 user 0m0.026s
00:11:10.808 sys 0m0.519s
00:11:10.808 13:34:49 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:10.808 13:34:49 -- common/autotest_common.sh@10 -- # set +x
00:11:10.808 ************************************
00:11:10.808 END TEST nvme_e2edp
00:11:10.808 ************************************
00:11:10.808 13:34:50 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /usr/home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:11:10.808 13:34:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:11:10.808 13:34:50 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:10.808 13:34:50 -- common/autotest_common.sh@10 -- # set +x
00:11:10.808 ************************************
00:11:10.808 START TEST nvme_reserve
00:11:10.808 ************************************
00:11:10.808 13:34:50 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:11:11.376 EAL: TSC is not safe to use in SMP mode
00:11:11.376 EAL: TSC is not invariant
00:11:11.376 [2024-07-10 13:34:50.477083] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:11:11.376 =====================================================
00:11:11.376 NVMe Controller at PCI bus 0, device 6, function 0
00:11:11.376 =====================================================
00:11:11.376 Reservations: Not Supported
00:11:11.376 Reservation test passed
00:11:11.376
00:11:11.376 real 0m0.484s
00:11:11.376 user 0m0.018s
00:11:11.376 sys 0m0.465s
00:11:11.376 13:34:50 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:11.376 13:34:50 -- common/autotest_common.sh@10 -- # set +x
00:11:11.376 ************************************
00:11:11.376 END TEST nvme_reserve
00:11:11.376 ************************************
00:11:11.376 13:34:50 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /usr/home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:11:11.376 13:34:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:11:11.376 13:34:50 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:11.376 13:34:50 -- common/autotest_common.sh@10 -- # set +x
00:11:11.376 ************************************
00:11:11.376 START TEST nvme_err_injection
00:11:11.376 ************************************
00:11:11.376 13:34:50 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:11:11.952 EAL: TSC is not safe to use in SMP mode
00:11:11.952 EAL: TSC is not invariant
00:11:11.952 [2024-07-10 13:34:51.014596] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:11:11.952 NVMe Error Injection test
00:11:11.952 Attaching to 0000:00:06.0
00:11:11.952 Attached to 0000:00:06.0
00:11:11.952 0000:00:06.0: get features failed as expected
00:11:11.952 0000:00:06.0: get features successfully as expected
00:11:11.952 0000:00:06.0: read failed as expected
00:11:11.952 0000:00:06.0: read successfully as expected
00:11:11.952 Cleaning up...
00:11:11.952
00:11:11.952 real 0m0.507s
00:11:11.952 user 0m0.026s
00:11:11.952 sys 0m0.481s
00:11:11.952 13:34:51 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:11.952 13:34:51 -- common/autotest_common.sh@10 -- # set +x
00:11:11.952 ************************************
00:11:11.952 END TEST nvme_err_injection
00:11:11.952 ************************************
00:11:11.952 13:34:51 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /usr/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:11:11.952 13:34:51 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']'
00:11:11.952 13:34:51 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:11.952 13:34:51 -- common/autotest_common.sh@10 -- # set +x
00:11:11.953 ************************************
00:11:11.953 START TEST nvme_overhead
00:11:11.953 ************************************
00:11:11.953 13:34:51 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:11:12.213 EAL: TSC is not safe to use in SMP mode
00:11:12.213 EAL: TSC is not invariant
00:11:12.213 [2024-07-10 13:34:51.565240] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:11:13.596 Initializing NVMe Controllers
00:11:13.596 Attaching to 0000:00:06.0
00:11:13.596 Attached to 0000:00:06.0
00:11:13.596 Initialization complete. Launching workers.
00:11:13.596 submit (in ns)   avg, min, max =   8132.3,   6176.2,  36736.5
00:11:13.596 complete (in ns) avg, min, max =   5445.2,   3721.8,  31065.0
00:11:13.596
00:11:13.596 Submit histogram
00:11:13.596 ================
00:11:13.596        Range in us     Cumulative     Count
00:11:13.596 [cumulative submit histogram: 6.164us - 36.817us, reaching 100.0000% at 36.817us]
00:11:13.597
00:11:13.597 Complete histogram
00:11:13.597 ==================
00:11:13.597        Range in us     Cumulative     Count
00:11:13.597 [cumulative complete histogram: 3.710us - 16.400us, reaching 99.4556% at 16.400us; buckets continue]
00:11:13.598 16.400 - 16.512: 99.5144% ( 8) 00:11:13.598 16.512 - 16.623: 99.5438% ( 4) 00:11:13.598 16.623 - 16.735: 99.5733% ( 4) 00:11:13.598 16.735 - 16.846: 99.5880% ( 2) 00:11:13.598 16.846 - 16.958: 99.6248% ( 5) 00:11:13.598 16.958 - 17.070: 99.6395% ( 2) 00:11:13.599 17.181 - 17.293: 99.6469% ( 1) 00:11:13.599 17.293 - 17.404: 99.6763% ( 4) 00:11:13.599 17.404 - 17.516: 99.6836% ( 1) 00:11:13.599 17.627 - 17.739: 99.6910% ( 1) 00:11:13.599 17.739 - 17.851: 99.6984% ( 1) 00:11:13.599 17.851 - 17.962: 99.7057% ( 1) 00:11:13.599 17.962 - 18.074: 99.7131% ( 1) 00:11:13.599 18.185 - 18.297: 99.7351% ( 3) 00:11:13.599 18.297 - 18.408: 99.7499% ( 2) 00:11:13.599 18.520 - 18.631: 99.7572% ( 1) 00:11:13.599 18.743 - 18.855: 99.7646% ( 1) 00:11:13.599 18.855 - 18.966: 99.7719% ( 1) 00:11:13.599 18.966 - 19.078: 99.7793% ( 1) 00:11:13.599 19.189 - 19.301: 99.7940% ( 2) 00:11:13.599 19.859 - 19.970: 99.8161% ( 3) 00:11:13.599 20.751 - 20.863: 99.8308% ( 2) 00:11:13.599 20.974 - 21.086: 99.8455% ( 2) 00:11:13.599 21.086 - 21.197: 99.8529% ( 1) 00:11:13.599 21.197 - 21.309: 99.8676% ( 2) 00:11:13.599 21.309 - 21.421: 99.8896% ( 3) 00:11:13.599 21.421 - 21.532: 99.8970% ( 1) 00:11:13.599 21.644 - 21.755: 99.9044% ( 1) 00:11:13.599 21.755 - 21.867: 99.9191% ( 2) 00:11:13.599 21.978 - 22.090: 99.9338% ( 2) 00:11:13.599 22.090 - 22.202: 99.9485% ( 2) 00:11:13.599 22.983 - 23.094: 99.9559% ( 1) 00:11:13.599 23.094 - 23.206: 99.9632% ( 1) 00:11:13.599 23.429 - 23.540: 99.9706% ( 1) 00:11:13.599 23.540 - 23.652: 99.9779% ( 1) 00:11:13.599 24.210 - 24.321: 99.9853% ( 1) 00:11:13.599 24.321 - 24.433: 99.9926% ( 1) 00:11:13.599 31.015 - 31.238: 100.0000% ( 1) 00:11:13.599 00:11:13.599 00:11:13.599 real 0m1.493s 00:11:13.599 user 0m1.019s 00:11:13.599 sys 0m0.473s 00:11:13.599 13:34:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.599 13:34:52 -- common/autotest_common.sh@10 -- # set +x 00:11:13.599 ************************************ 00:11:13.599 END TEST nvme_overhead 00:11:13.599 ************************************ 00:11:13.599 13:34:52 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:13.599 13:34:52 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:13.599 13:34:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:13.599 13:34:52 -- common/autotest_common.sh@10 -- # set +x 00:11:13.599 ************************************ 00:11:13.599 START TEST nvme_arbitration 00:11:13.599 ************************************ 00:11:13.599 13:34:52 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:13.857 EAL: TSC is not safe to use in SMP mode 00:11:13.857 EAL: TSC is not invariant 00:11:13.857 [2024-07-10 13:34:53.104123] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:18.106 Initializing NVMe Controllers 00:11:18.106 Attaching to 0000:00:06.0 00:11:18.106 Attached to 0000:00:06.0 00:11:18.106 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:11:18.106 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:11:18.106 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:11:18.106 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:11:18.106 /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:11:18.106 /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:18.106 
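Each per-core result line below pairs the measured rate (IO/s) with the projected time to complete the 100000-I/O workload requested by "-n 100000" in the command above; the two figures are reciprocals of one another. Worked example for core 0:

    100000 ios / 5693.67 IO/s ≈ 17.56 secs/100000 ios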
Initialization complete. Launching workers. 00:11:18.106 Starting thread on core 1 with urgent priority queue 00:11:18.106 Starting thread on core 2 with urgent priority queue 00:11:18.106 Starting thread on core 3 with urgent priority queue 00:11:18.106 Starting thread on core 0 with urgent priority queue 00:11:18.106 QEMU NVMe Ctrl (12340 ) core 0: 5693.67 IO/s 17.56 secs/100000 ios 00:11:18.106 QEMU NVMe Ctrl (12340 ) core 1: 5936.33 IO/s 16.85 secs/100000 ios 00:11:18.106 QEMU NVMe Ctrl (12340 ) core 2: 5746.67 IO/s 17.40 secs/100000 ios 00:11:18.106 QEMU NVMe Ctrl (12340 ) core 3: 5752.67 IO/s 17.38 secs/100000 ios 00:11:18.106 ======================================================== 00:11:18.106 00:11:18.106 00:11:18.106 real 0m4.614s 00:11:18.106 user 0m13.151s 00:11:18.106 sys 0m0.497s 00:11:18.106 13:34:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:18.106 13:34:57 -- common/autotest_common.sh@10 -- # set +x 00:11:18.106 ************************************ 00:11:18.106 END TEST nvme_arbitration 00:11:18.106 ************************************ 00:11:18.106 13:34:57 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /usr/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:11:18.106 13:34:57 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:18.106 13:34:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:18.106 13:34:57 -- common/autotest_common.sh@10 -- # set +x 00:11:18.106 ************************************ 00:11:18.106 START TEST nvme_single_aen 00:11:18.106 ************************************ 00:11:18.106 13:34:57 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:11:18.106 [2024-07-10 13:34:57.337797] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:18.106 [2024-07-10 13:34:57.337979] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:18.733 EAL: TSC is not safe to use in SMP mode 00:11:18.733 EAL: TSC is not invariant 00:11:18.733 [2024-07-10 13:34:58.086932] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:18.733 [2024-07-10 13:34:58.091635] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:18.991 Asynchronous Event Request test 00:11:18.991 Attaching to 0000:00:06.0 00:11:18.991 Attached to 0000:00:06.0 00:11:18.991 Reset controller to setup AER completions for this process 00:11:18.991 Registering asynchronous event callbacks... 00:11:18.991 Getting orig temperature thresholds of all controllers 00:11:18.991 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:18.991 Setting all controllers temperature threshold low to trigger AER 00:11:18.991 Waiting for all controllers temperature threshold to be set lower 00:11:18.991 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:18.991 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:11:18.991 Waiting for all controllers to trigger AER and reset threshold 00:11:18.991 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:18.991 Cleaning up... 
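The single AEN test above exercises the temperature-threshold feature: the original threshold of 343 Kelvin (343 - 273 ≈ 70 Celsius, matching the log's rounding) is lowered beneath the drive's current temperature of 323 Kelvin (50 Celsius), so the controller immediately raises a temperature asynchronous event; aer_cb receives it as log page 2 (aen_event_type 0x01, aen_event_info 0x01) and then restores the threshold, as shown in the trace above.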
00:11:18.991 00:11:18.991 real 0m0.823s 00:11:18.991 user 0m0.030s 00:11:18.991 sys 0m0.792s 00:11:18.991 13:34:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:18.991 13:34:58 -- common/autotest_common.sh@10 -- # set +x 00:11:18.991 ************************************ 00:11:18.991 END TEST nvme_single_aen 00:11:18.991 ************************************ 00:11:18.991 13:34:58 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:18.991 13:34:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:18.991 13:34:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:18.991 13:34:58 -- common/autotest_common.sh@10 -- # set +x 00:11:18.991 ************************************ 00:11:18.991 START TEST nvme_doorbell_aers 00:11:18.991 ************************************ 00:11:18.991 13:34:58 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:11:18.991 13:34:58 -- nvme/nvme.sh@70 -- # bdfs=() 00:11:18.991 13:34:58 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:18.991 13:34:58 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:18.991 13:34:58 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:18.991 13:34:58 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:18.991 13:34:58 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:18.991 13:34:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:18.991 13:34:58 -- common/autotest_common.sh@1499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:18.991 13:34:58 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:18.991 13:34:58 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:11:18.991 13:34:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:11:18.991 13:34:58 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:18.991 13:34:58 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /usr/home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:19.560 EAL: TSC is not safe to use in SMP mode 00:11:19.560 EAL: TSC is not invariant 00:11:19.560 [2024-07-10 13:34:58.728458] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:19.560 Executing: test_write_invalid_db 00:11:19.560 Waiting for AER completion... 00:11:19.560 Asynchronous Event received. 00:11:19.560 Error Informaton Log Page received. 00:11:19.560 Success: test_write_invalid_db 00:11:19.560 00:11:19.560 Executing: test_invalid_db_write_overflow_sq 00:11:19.560 Waiting for AER completion... 00:11:19.560 Asynchronous Event received. 00:11:19.560 Error Informaton Log Page received. 00:11:19.560 Success: test_invalid_db_write_overflow_sq 00:11:19.560 00:11:19.560 Executing: test_invalid_db_write_overflow_cq 00:11:19.560 Waiting for AER completion... 00:11:19.560 Asynchronous Event received. 00:11:19.560 Error Informaton Log Page received. 
00:11:19.560 Success: test_invalid_db_write_overflow_cq 00:11:19.560 00:11:19.560 00:11:19.560 real 0m0.581s 00:11:19.560 user 0m0.037s 00:11:19.560 sys 0m0.560s 00:11:19.560 13:34:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.560 13:34:58 -- common/autotest_common.sh@10 -- # set +x 00:11:19.560 ************************************ 00:11:19.560 END TEST nvme_doorbell_aers 00:11:19.560 ************************************ 00:11:19.560 13:34:58 -- nvme/nvme.sh@97 -- # uname 00:11:19.560 13:34:58 -- nvme/nvme.sh@97 -- # '[' FreeBSD '!=' FreeBSD ']' 00:11:19.560 13:34:58 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:19.560 13:34:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:19.560 13:34:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:19.560 13:34:58 -- common/autotest_common.sh@10 -- # set +x 00:11:19.560 ************************************ 00:11:19.560 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:19.560 ************************************ 00:11:19.560 13:34:58 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:19.819 * Looking for test storage... 00:11:19.819 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:11:19.819 13:34:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:19.819 13:34:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:19.819 13:34:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:19.819 13:34:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:19.819 13:34:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:19.819 13:34:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:19.819 13:34:59 -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:19.819 13:34:59 -- common/autotest_common.sh@1509 -- # local bdfs 00:11:19.819 13:34:59 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:19.819 13:34:59 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:19.819 13:34:59 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:19.819 13:34:59 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:19.819 13:34:59 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:19.819 13:34:59 -- common/autotest_common.sh@1499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:19.819 13:34:59 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:19.819 13:34:59 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:11:19.819 13:34:59 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:11:19.819 13:34:59 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:11:19.819 13:34:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:11:19.819 13:34:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:11:19.819 13:34:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=54576 00:11:19.819 13:34:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:19.819 13:34:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:19.819 13:34:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 54576 00:11:19.819 13:34:59 -- 
common/autotest_common.sh@819 -- # '[' -z 54576 ']' 00:11:19.819 13:34:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.819 13:34:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:19.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.819 13:34:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.819 13:34:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:19.819 13:34:59 -- common/autotest_common.sh@10 -- # set +x 00:11:19.819 [2024-07-10 13:34:59.111387] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:19.819 [2024-07-10 13:34:59.111663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:20.405 EAL: TSC is not safe to use in SMP mode 00:11:20.405 EAL: TSC is not invariant 00:11:20.405 [2024-07-10 13:34:59.574587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.405 [2024-07-10 13:34:59.656673] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:20.405 [2024-07-10 13:34:59.656911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.405 [2024-07-10 13:34:59.657572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.405 [2024-07-10 13:34:59.657529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.405 [2024-07-10 13:34:59.657573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.675 13:35:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:20.675 13:35:00 -- common/autotest_common.sh@852 -- # return 0 00:11:20.675 13:35:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:11:20.675 13:35:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:20.675 13:35:00 -- common/autotest_common.sh@10 -- # set +x 00:11:20.675 [2024-07-10 13:35:00.023567] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:20.934 nvme0n1 00:11:20.934 13:35:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:20.934 13:35:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:20.934 13:35:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XXXXX.txt 00:11:20.934 13:35:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:20.934 13:35:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:20.935 13:35:00 -- common/autotest_common.sh@10 -- # set +x 00:11:20.935 true 00:11:20.935 13:35:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:20.935 13:35:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:20.935 13:35:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720618500 00:11:20.935 13:35:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=54584 00:11:20.935 13:35:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:20.935 13:35:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:20.935 13:35:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:23.470 13:35:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:23.470 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:11:23.470 [2024-07-10 13:35:02.236923] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:23.470 [2024-07-10 13:35:02.237066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:23.470 [2024-07-10 13:35:02.237091] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:23.470 [2024-07-10 13:35:02.237099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.470 [2024-07-10 13:35:02.237605] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:23.470 13:35:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:23.470 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 54584 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 54584 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 54584 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:23.470 13:35:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:23.470 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:11:23.470 13:35:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XXXXX.txt 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.bWAW1V 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 
-- # hexdump -ve '/1 "0x%02x\n"' 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.gAIZy9 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XXXXX.txt 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 54576 00:11:23.470 13:35:02 -- common/autotest_common.sh@926 -- # '[' -z 54576 ']' 00:11:23.470 13:35:02 -- common/autotest_common.sh@930 -- # kill -0 54576 00:11:23.470 13:35:02 -- common/autotest_common.sh@931 -- # uname 00:11:23.470 13:35:02 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:11:23.470 13:35:02 -- common/autotest_common.sh@934 -- # ps -c -o command 54576 00:11:23.470 13:35:02 -- common/autotest_common.sh@934 -- # tail -1 00:11:23.470 13:35:02 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:11:23.470 13:35:02 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:11:23.470 killing process with pid 54576 00:11:23.470 13:35:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54576' 00:11:23.470 13:35:02 -- common/autotest_common.sh@945 -- # kill 54576 00:11:23.470 13:35:02 -- common/autotest_common.sh@950 -- # wait 54576 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:23.470 13:35:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:23.470 00:11:23.470 real 0m3.712s 00:11:23.470 user 0m12.055s 00:11:23.470 sys 0m0.781s 00:11:23.470 13:35:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:23.470 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:11:23.470 ************************************ 00:11:23.470 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:23.470 ************************************ 00:11:23.470 13:35:02 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:23.470 13:35:02 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:23.470 13:35:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:23.470 13:35:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:23.470 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:11:23.470 ************************************ 00:11:23.470 START TEST nvme_fio 00:11:23.470 ************************************ 00:11:23.470 13:35:02 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:11:23.470 13:35:02 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/usr/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:23.470 13:35:02 -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:23.470 13:35:02 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:23.470 13:35:02 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:23.470 13:35:02 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:23.470 13:35:02 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:23.470 13:35:02 -- common/autotest_common.sh@1499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:23.470 13:35:02 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:23.470 13:35:02 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:11:23.470 13:35:02 -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:11:23.470 13:35:02 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0') 00:11:23.470 13:35:02 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:23.470 13:35:02 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:23.470 13:35:02 -- nvme/nvme.sh@35 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:23.470 13:35:02 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:24.036 EAL: TSC is not safe to use in SMP mode 00:11:24.036 EAL: TSC is not invariant 00:11:24.036 [2024-07-10 13:35:03.124170] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:24.036 13:35:03 -- nvme/nvme.sh@38 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:24.036 13:35:03 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:24.295 EAL: TSC is not safe to use in SMP mode 00:11:24.295 EAL: TSC is not invariant 00:11:24.295 [2024-07-10 13:35:03.621805] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:24.554 13:35:03 -- nvme/nvme.sh@41 -- # bs=4096 00:11:24.554 13:35:03 -- nvme/nvme.sh@43 -- # fio_nvme /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:24.555 13:35:03 -- common/autotest_common.sh@1339 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:24.555 13:35:03 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:11:24.555 13:35:03 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:24.555 13:35:03 -- common/autotest_common.sh@1318 -- # local sanitizers 00:11:24.555 13:35:03 -- common/autotest_common.sh@1319 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:24.555 13:35:03 -- common/autotest_common.sh@1320 -- # shift 00:11:24.555 13:35:03 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:11:24.555 13:35:03 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:11:24.555 13:35:03 -- common/autotest_common.sh@1324 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:24.555 13:35:03 -- common/autotest_common.sh@1324 -- # grep libasan 00:11:24.555 13:35:03 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:11:24.555 13:35:03 -- common/autotest_common.sh@1324 -- # asan_lib= 00:11:24.555 13:35:03 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:11:24.555 13:35:03 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:11:24.555 13:35:03 -- common/autotest_common.sh@1324 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:24.555 13:35:03 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:11:24.555 13:35:03 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:11:24.555 13:35:03 -- common/autotest_common.sh@1324 -- # asan_lib= 00:11:24.555 13:35:03 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:11:24.555 13:35:03 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:24.555 13:35:03 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:24.555 test: (g=0): 
rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:24.555 fio-3.35 00:11:24.555 Starting 1 thread 00:11:25.123 EAL: TSC is not safe to use in SMP mode 00:11:25.123 EAL: TSC is not invariant 00:11:25.123 [2024-07-10 13:35:04.233581] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:35.163 00:11:35.163 test: (groupid=0, jobs=1): err= 0: pid=102811: Wed Jul 10 13:35:13 2024 00:11:35.163 read: IOPS=55.9k, BW=218MiB/s (229MB/s)(437MiB/2001msec) 00:11:35.163 slat (nsec): min=441, max=40237, avg=514.81, stdev=251.30 00:11:35.163 clat (usec): min=237, max=6121, avg=1144.15, stdev=239.83 00:11:35.163 lat (usec): min=243, max=6161, avg=1144.67, stdev=239.91 00:11:35.163 clat percentiles (usec): 00:11:35.163 | 1.00th=[ 873], 5.00th=[ 922], 10.00th=[ 955], 20.00th=[ 1004], 00:11:35.163 | 30.00th=[ 1045], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1156], 00:11:35.163 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[ 1303], 95.00th=[ 1352], 00:11:35.163 | 99.00th=[ 2147], 99.50th=[ 2606], 99.90th=[ 3884], 99.95th=[ 4293], 00:11:35.163 | 99.99th=[ 5014] 00:11:35.163 bw ( KiB/s): min=216107, max=229576, per=99.62%, avg=222750.00, stdev=6736.36, samples=3 00:11:35.163 iops : min=54026, max=57394, avg=55687.00, stdev=1684.47, samples=3 00:11:35.163 write: IOPS=55.8k, BW=218MiB/s (228MB/s)(436MiB/2001msec); 0 zone resets 00:11:35.163 slat (nsec): min=459, max=46669, avg=797.56, stdev=343.99 00:11:35.163 clat (usec): min=247, max=5174, avg=1145.06, stdev=236.79 00:11:35.163 lat (usec): min=248, max=5179, avg=1145.85, stdev=236.88 00:11:35.163 clat percentiles (usec): 00:11:35.163 | 1.00th=[ 873], 5.00th=[ 922], 10.00th=[ 955], 20.00th=[ 1004], 00:11:35.163 | 30.00th=[ 1045], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1172], 00:11:35.164 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[ 1303], 95.00th=[ 1352], 00:11:35.164 | 99.00th=[ 2147], 99.50th=[ 2573], 99.90th=[ 3851], 99.95th=[ 4293], 00:11:35.164 | 99.99th=[ 5014] 00:11:35.164 bw ( KiB/s): min=216237, max=228625, per=99.48%, avg=221879.67, stdev=6267.18, samples=3 00:11:35.164 iops : min=54059, max=57156, avg=55469.67, stdev=1566.79, samples=3 00:11:35.164 lat (usec) : 250=0.01%, 500=0.10%, 750=0.21%, 1000=19.41% 00:11:35.164 lat (msec) : 2=79.11%, 4=1.09%, 10=0.07% 00:11:35.164 cpu : usr=100.05%, sys=0.00%, ctx=24, majf=0, minf=3 00:11:35.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:11:35.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:35.164 issued rwts: total=111856,111579,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:35.164 00:11:35.164 Run status group 0 (all jobs): 00:11:35.164 READ: bw=218MiB/s (229MB/s), 218MiB/s-218MiB/s (229MB/s-229MB/s), io=437MiB (458MB), run=2001-2001msec 00:11:35.164 WRITE: bw=218MiB/s (228MB/s), 218MiB/s-218MiB/s (228MB/s-228MB/s), io=436MiB (457MB), run=2001-2001msec 00:11:35.164 13:35:13 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:35.164 13:35:13 -- nvme/nvme.sh@46 -- # true 00:11:35.164 00:11:35.164 real 0m11.380s 00:11:35.164 user 0m9.416s 00:11:35.164 sys 0m1.913s 00:11:35.164 13:35:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.164 13:35:13 -- common/autotest_common.sh@10 -- # set +x 00:11:35.164 ************************************ 00:11:35.164 END TEST nvme_fio 00:11:35.164 
************************************ 00:11:35.164 00:11:35.164 real 0m31.655s 00:11:35.164 user 0m39.187s 00:11:35.164 sys 0m10.632s 00:11:35.164 13:35:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.164 13:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.164 ************************************ 00:11:35.164 END TEST nvme 00:11:35.164 ************************************ 00:11:35.164 13:35:14 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:11:35.164 13:35:14 -- spdk/autotest.sh@227 -- # run_test nvme_scc /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:35.164 13:35:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:35.164 13:35:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:35.164 13:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.164 ************************************ 00:11:35.164 START TEST nvme_scc 00:11:35.164 ************************************ 00:11:35.164 13:35:14 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:35.164 * Looking for test storage... 00:11:35.164 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:11:35.164 13:35:14 -- cuse/common.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:35.164 13:35:14 -- nvme/functions.sh@7 -- # dirname /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:35.164 13:35:14 -- nvme/functions.sh@7 -- # readlink -f /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:35.164 13:35:14 -- nvme/functions.sh@7 -- # rootdir=/usr/home/vagrant/spdk_repo/spdk 00:11:35.164 13:35:14 -- nvme/functions.sh@8 -- # source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:35.164 13:35:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.164 13:35:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.164 13:35:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.164 13:35:14 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:11:35.164 13:35:14 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:11:35.164 13:35:14 -- paths/export.sh@4 -- # export PATH 00:11:35.164 13:35:14 -- paths/export.sh@5 -- # echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:11:35.164 13:35:14 -- nvme/functions.sh@10 -- # ctrls=() 00:11:35.164 13:35:14 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:35.164 13:35:14 -- nvme/functions.sh@11 -- # nvmes=() 00:11:35.164 13:35:14 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:35.164 13:35:14 -- nvme/functions.sh@12 -- # bdfs=() 00:11:35.164 13:35:14 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:35.164 13:35:14 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:35.164 13:35:14 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:35.164 13:35:14 -- nvme/functions.sh@14 -- # nvme_name= 00:11:35.164 13:35:14 -- cuse/common.sh@11 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:35.164 
13:35:14 -- nvme/nvme_scc.sh@12 -- # uname 00:11:35.164 13:35:14 -- nvme/nvme_scc.sh@12 -- # [[ FreeBSD == Linux ]] 00:11:35.164 13:35:14 -- nvme/nvme_scc.sh@12 -- # exit 0 00:11:35.164 00:11:35.164 real 0m0.227s 00:11:35.164 user 0m0.150s 00:11:35.164 sys 0m0.171s 00:11:35.164 13:35:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.164 13:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.164 ************************************ 00:11:35.164 END TEST nvme_scc 00:11:35.164 ************************************ 00:11:35.164 13:35:14 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:11:35.164 13:35:14 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:11:35.164 13:35:14 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:11:35.164 13:35:14 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:11:35.164 13:35:14 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:11:35.164 13:35:14 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:35.164 13:35:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:35.164 13:35:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:35.164 13:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.164 ************************************ 00:11:35.164 START TEST nvme_rpc 00:11:35.164 ************************************ 00:11:35.164 13:35:14 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:35.164 * Looking for test storage... 00:11:35.164 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:11:35.164 13:35:14 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:35.164 13:35:14 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:35.164 13:35:14 -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:35.164 13:35:14 -- common/autotest_common.sh@1509 -- # local bdfs 00:11:35.164 13:35:14 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:35.164 13:35:14 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:35.164 13:35:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:35.164 13:35:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:35.164 13:35:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:35.164 13:35:14 -- common/autotest_common.sh@1499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:35.164 13:35:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:35.423 13:35:14 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:11:35.423 13:35:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:11:35.423 13:35:14 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:11:35.423 13:35:14 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:11:35.423 13:35:14 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=54792 00:11:35.423 13:35:14 -- nvme/nvme_rpc.sh@15 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:35.424 13:35:14 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:35.424 13:35:14 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 54792 00:11:35.424 13:35:14 -- common/autotest_common.sh@819 -- # '[' -z 54792 ']' 00:11:35.424 13:35:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.424 13:35:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:35.424 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:11:35.424 13:35:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.424 13:35:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:35.424 13:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.424 [2024-07-10 13:35:14.600755] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:35.424 [2024-07-10 13:35:14.601066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:35.683 EAL: TSC is not safe to use in SMP mode 00:11:35.683 EAL: TSC is not invariant 00:11:35.942 [2024-07-10 13:35:15.060947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:35.942 [2024-07-10 13:35:15.155847] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:35.942 [2024-07-10 13:35:15.156120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.942 [2024-07-10 13:35:15.156122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.509 13:35:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:36.509 13:35:15 -- common/autotest_common.sh@852 -- # return 0 00:11:36.509 13:35:15 -- nvme/nvme_rpc.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:11:36.509 [2024-07-10 13:35:15.771186] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:36.509 Nvme0n1 00:11:36.509 13:35:15 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:36.509 13:35:15 -- nvme/nvme_rpc.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:36.767 request: 00:11:36.767 { 00:11:36.767 "filename": "non_existing_file", 00:11:36.767 "bdev_name": "Nvme0n1", 00:11:36.767 "method": "bdev_nvme_apply_firmware", 00:11:36.767 "req_id": 1 00:11:36.767 } 00:11:36.767 Got JSON-RPC error response 00:11:36.767 response: 00:11:36.767 { 00:11:36.767 "code": -32603, 00:11:36.767 "message": "open file failed." 
00:11:36.767 } 00:11:36.767 13:35:16 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:36.767 13:35:16 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:36.767 13:35:16 -- nvme/nvme_rpc.sh@37 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:37.024 13:35:16 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:37.024 13:35:16 -- nvme/nvme_rpc.sh@40 -- # killprocess 54792 00:11:37.024 13:35:16 -- common/autotest_common.sh@926 -- # '[' -z 54792 ']' 00:11:37.024 13:35:16 -- common/autotest_common.sh@930 -- # kill -0 54792 00:11:37.024 13:35:16 -- common/autotest_common.sh@931 -- # uname 00:11:37.024 13:35:16 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:11:37.024 13:35:16 -- common/autotest_common.sh@934 -- # ps -c -o command 54792 00:11:37.024 13:35:16 -- common/autotest_common.sh@934 -- # tail -1 00:11:37.024 13:35:16 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:11:37.024 13:35:16 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:11:37.024 killing process with pid 54792 00:11:37.024 13:35:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54792' 00:11:37.024 13:35:16 -- common/autotest_common.sh@945 -- # kill 54792 00:11:37.024 13:35:16 -- common/autotest_common.sh@950 -- # wait 54792 00:11:37.281 00:11:37.281 real 0m2.240s 00:11:37.281 user 0m3.967s 00:11:37.281 sys 0m0.829s 00:11:37.281 13:35:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.282 13:35:16 -- common/autotest_common.sh@10 -- # set +x 00:11:37.282 ************************************ 00:11:37.282 END TEST nvme_rpc 00:11:37.282 ************************************ 00:11:37.282 13:35:16 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:37.282 13:35:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:37.282 13:35:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:37.282 13:35:16 -- common/autotest_common.sh@10 -- # set +x 00:11:37.282 ************************************ 00:11:37.282 START TEST nvme_rpc_timeouts 00:11:37.282 ************************************ 00:11:37.282 13:35:16 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:37.540 * Looking for test storage... 
00:11:37.540 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:11:37.540 13:35:16 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:37.540 13:35:16 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_54821 00:11:37.540 13:35:16 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_54821 00:11:37.540 13:35:16 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=54848 00:11:37.540 13:35:16 -- nvme/nvme_rpc_timeouts.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:37.540 13:35:16 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:37.540 13:35:16 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 54848 00:11:37.540 13:35:16 -- common/autotest_common.sh@819 -- # '[' -z 54848 ']' 00:11:37.540 13:35:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.540 13:35:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:37.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.540 13:35:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.540 13:35:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:37.540 13:35:16 -- common/autotest_common.sh@10 -- # set +x 00:11:37.540 [2024-07-10 13:35:16.838039] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:37.540 [2024-07-10 13:35:16.838391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:38.108 EAL: TSC is not safe to use in SMP mode 00:11:38.108 EAL: TSC is not invariant 00:11:38.108 [2024-07-10 13:35:17.290681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:38.108 [2024-07-10 13:35:17.383932] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:38.108 [2024-07-10 13:35:17.384173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.108 [2024-07-10 13:35:17.384173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.674 13:35:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:38.674 13:35:17 -- common/autotest_common.sh@852 -- # return 0 00:11:38.674 Checking default timeout settings: 00:11:38.674 13:35:17 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:38.674 13:35:17 -- nvme/nvme_rpc_timeouts.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:38.933 Making settings changes with rpc: 00:11:38.933 13:35:18 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:38.933 13:35:18 -- nvme/nvme_rpc_timeouts.sh@34 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:39.190 Check default vs. modified settings: 00:11:39.190 13:35:18 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:11:39.190 13:35:18 -- nvme/nvme_rpc_timeouts.sh@37 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_54821 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_54821 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:39.449 Setting action_on_timeout is changed as expected. 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_54821 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_54821 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:39.449 Setting timeout_us is changed as expected. 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_54821 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_54821 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:39.449 Setting timeout_admin_us is changed as expected. 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
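The three "changed as expected" checks above reduce to a save-and-compare pattern: snapshot the bdev_nvme options with save_config, apply the new timeouts, snapshot again, and compare each field. A minimal standalone sketch of the same idea, assuming a running SPDK target reachable at the default /var/tmp/spdk.sock and the same rpc.py path, looks like this:

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # snapshot the default bdev_nvme options
    "$rpc" save_config > /tmp/settings_default
    # apply the modified timeouts, exactly as in the test above
    "$rpc" bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc" save_config > /tmp/settings_modified
    # every setting should now differ between the two snapshots
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
    done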
00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_54821 /tmp/settings_modified_54821 00:11:39.449 13:35:18 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 54848 00:11:39.449 13:35:18 -- common/autotest_common.sh@926 -- # '[' -z 54848 ']' 00:11:39.449 13:35:18 -- common/autotest_common.sh@930 -- # kill -0 54848 00:11:39.449 13:35:18 -- common/autotest_common.sh@931 -- # uname 00:11:39.449 13:35:18 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:11:39.449 13:35:18 -- common/autotest_common.sh@934 -- # tail -1 00:11:39.449 13:35:18 -- common/autotest_common.sh@934 -- # ps -c -o command 54848 00:11:39.449 13:35:18 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:11:39.449 13:35:18 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:11:39.449 killing process with pid 54848 00:11:39.450 13:35:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54848' 00:11:39.450 13:35:18 -- common/autotest_common.sh@945 -- # kill 54848 00:11:39.450 13:35:18 -- common/autotest_common.sh@950 -- # wait 54848 00:11:39.708 RPC TIMEOUT SETTING TEST PASSED. 00:11:39.708 13:35:18 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:11:39.708 00:11:39.708 real 0m2.271s 00:11:39.708 user 0m4.252s 00:11:39.708 sys 0m0.737s 00:11:39.708 13:35:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.708 13:35:18 -- common/autotest_common.sh@10 -- # set +x 00:11:39.708 ************************************ 00:11:39.708 END TEST nvme_rpc_timeouts 00:11:39.708 ************************************ 00:11:39.708 13:35:18 -- spdk/autotest.sh@251 -- # '[' 0 -eq 0 ']' 00:11:39.708 13:35:18 -- spdk/autotest.sh@251 -- # uname -s 00:11:39.708 13:35:18 -- spdk/autotest.sh@251 -- # '[' FreeBSD = Linux ']' 00:11:39.708 13:35:18 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:11:39.708 13:35:18 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:11:39.708 13:35:18 -- spdk/autotest.sh@268 -- # timing_exit lib 00:11:39.708 13:35:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:39.708 13:35:18 -- common/autotest_common.sh@10 -- # set +x 00:11:39.708 13:35:18 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:11:39.708 13:35:18 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:11:39.708 13:35:18 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:11:39.708 13:35:18 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:11:39.708 13:35:18 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:11:39.708 13:35:18 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:11:39.708 13:35:18 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:11:39.708 13:35:18 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:11:39.708 13:35:18 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:11:39.708 13:35:18 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:11:39.708 13:35:18 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:11:39.708 13:35:18 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:11:39.708 13:35:18 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:11:39.708 13:35:18 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:11:39.708 13:35:18 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:11:39.708 13:35:18 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:11:39.708 13:35:18 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:11:39.708 13:35:18 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:11:39.708 13:35:18 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:11:39.708 13:35:18 -- 
spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:11:39.708 13:35:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:39.708 13:35:18 -- common/autotest_common.sh@10 -- # set +x 00:11:39.708 13:35:19 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:11:39.708 13:35:19 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:11:39.708 13:35:19 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:11:39.708 13:35:19 -- common/autotest_common.sh@10 -- # set +x 00:11:40.644 setup.sh cleanup function not yet supported on FreeBSD 00:11:40.644 13:35:19 -- common/autotest_common.sh@1436 -- # return 0 00:11:40.644 13:35:19 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:11:40.644 13:35:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:40.644 13:35:19 -- common/autotest_common.sh@10 -- # set +x 00:11:40.644 13:35:19 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:11:40.644 13:35:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:40.644 13:35:19 -- common/autotest_common.sh@10 -- # set +x 00:11:40.644 13:35:19 -- spdk/autotest.sh@390 -- # chmod a+r /usr/home/vagrant/spdk_repo/spdk/../output/timing.txt 00:11:40.644 13:35:19 -- spdk/autotest.sh@392 -- # [[ -f /usr/home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:11:40.644 13:35:19 -- spdk/autotest.sh@394 -- # hash lcov 00:11:40.644 /usr/home/vagrant/spdk_repo/spdk/autotest.sh: line 394: hash: lcov: not found 00:11:40.644 13:35:19 -- common/autobuild_common.sh@15 -- $ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:40.644 13:35:19 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:11:40.644 13:35:19 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.644 13:35:19 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.644 13:35:19 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:11:40.644 13:35:19 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:11:40.644 13:35:19 -- paths/export.sh@4 -- $ export PATH 00:11:40.644 13:35:19 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:11:40.644 13:35:19 -- common/autobuild_common.sh@434 -- $ out=/usr/home/vagrant/spdk_repo/spdk/../output 00:11:40.644 13:35:19 -- common/autobuild_common.sh@435 -- $ date +%s 00:11:40.644 13:35:19 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720618519.XXXXXX 00:11:40.644 13:35:19 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720618519.XXXXXX.KSEhBmUB 00:11:40.644 13:35:19 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:11:40.644 13:35:19 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:11:40.644 13:35:19 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/' 00:11:40.644 13:35:19 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:11:40.644 13:35:19 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /usr/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/ --exclude 
/usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:11:40.644 13:35:19 -- common/autobuild_common.sh@451 -- $ get_config_params 00:11:40.644 13:35:19 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:11:40.644 13:35:19 -- common/autotest_common.sh@10 -- $ set +x 00:11:40.902 13:35:20 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:11:40.902 13:35:20 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:11:40.902 13:35:20 -- spdk/autopackage.sh@11 -- $ cd /usr/home/vagrant/spdk_repo/spdk 00:11:40.902 13:35:20 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:11:40.902 13:35:20 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:11:40.902 13:35:20 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:11:40.902 13:35:20 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:11:40.902 13:35:20 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:11:40.902 13:35:20 -- common/autotest_common.sh@10 -- $ set +x 00:11:40.902 13:35:20 -- spdk/autopackage.sh@26 -- $ [[ /usr/bin/clang == *clang* ]] 00:11:40.902 13:35:20 -- spdk/autopackage.sh@27 -- $ nproc 00:11:40.902 13:35:20 -- spdk/autopackage.sh@27 -- $ jobs=5 00:11:40.902 13:35:20 -- spdk/autopackage.sh@28 -- $ case "$(uname -s)" in 00:11:40.902 13:35:20 -- spdk/autopackage.sh@28 -- $ uname -s 00:11:40.902 13:35:20 -- spdk/autopackage.sh@28 -- $ case "$(uname -s)" in 00:11:40.902 13:35:20 -- spdk/autopackage.sh@32 -- $ export LD=ld.lld 00:11:40.902 13:35:20 -- spdk/autopackage.sh@32 -- $ LD=ld.lld 00:11:40.902 13:35:20 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:11:40.902 13:35:20 -- spdk/autopackage.sh@40 -- $ get_config_params 00:11:40.902 13:35:20 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:11:40.902 13:35:20 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:11:40.902 13:35:20 -- common/autotest_common.sh@10 -- $ set +x 00:11:40.902 13:35:20 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:11:40.902 13:35:20 -- spdk/autopackage.sh@41 -- $ /usr/home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-lto 00:11:41.161 Notice: Vhost, rte_vhost library, virtio, and fuse 00:11:41.161 are only supported on Linux. Turning off default feature. 00:11:41.161 Using default SPDK env in /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:41.161 Using default DPDK in /usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:41.421 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:11:41.421 Using 'verbs' RDMA provider 00:11:51.681 Configuring ISA-L (logfile: /usr/home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:12:01.649 Configuring ISA-L-crypto (logfile: /usr/home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:12:01.649 Creating mk/config.mk...done. 00:12:01.649 Creating mk/cc.flags.mk...done. 00:12:01.649 Type 'gmake' to build. 00:12:01.649 13:35:40 -- spdk/autopackage.sh@43 -- $ gmake -j10 00:12:01.649 gmake[1]: Nothing to be done for 'all'. 
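The autopackage step above reuses the functional-test configure flags, strips --enable-debug, and reconfigures with LTO before invoking gmake. A minimal sketch of that sequence, assuming the repository path, flag list, and job count visible in this log (the real logic lives in spdk/autopackage.sh and the common autobuild scripts, which also drive scan-build and packaging):

    #!/bin/sh
    # Hedged re-creation of the release rebuild above; paths and flags are the
    # ones shown in this log, not a general recipe.
    SPDK_DIR=/usr/home/vagrant/spdk_repo/spdk
    MAKE=gmake
    MAKEFLAGS=-j10                     # job count set at the top of autopackage.sh

    cd "$SPDK_DIR"

    # Reuse the functional-test config but drop --enable-debug (the sed above),
    # then reconfigure with LTO enabled for the release build.
    config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio'
    release_params=$(echo "$config_params" | sed 's/--enable-debug//g')

    # The host compiler here is clang, so the lld linker is exported.
    case "$(cc --version 2>/dev/null)" in
        *clang*) export LD=ld.lld ;;
    esac

    ./configure $release_params --enable-lto
    $MAKE $MAKEFLAGS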
00:12:01.649 ps: stdin: not a terminal 00:12:08.215 The Meson build system 00:12:08.215 Version: 1.3.1 00:12:08.215 Source dir: /usr/home/vagrant/spdk_repo/spdk/dpdk 00:12:08.215 Build dir: /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:12:08.215 Build type: native build 00:12:08.215 Program cat found: YES (/bin/cat) 00:12:08.215 Project name: DPDK 00:12:08.215 Project version: 23.11.0 00:12:08.215 C compiler for the host machine: /usr/bin/clang (clang 14.0.5 "FreeBSD clang version 14.0.5 (https://github.com/llvm/llvm-project.git llvmorg-14.0.5-0-gc12386ae247c)") 00:12:08.215 C linker for the host machine: /usr/bin/clang ld.lld 14.0.5 00:12:08.215 Host machine cpu family: x86_64 00:12:08.215 Host machine cpu: x86_64 00:12:08.215 Message: ## Building in Developer Mode ## 00:12:08.215 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:12:08.215 Program check-symbols.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:12:08.215 Program options-ibverbs-static.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:12:08.215 Program python3 found: YES (/usr/local/bin/python3.9) 00:12:08.215 Program cat found: YES (/bin/cat) 00:12:08.215 Compiler for C supports arguments -march=native: YES 00:12:08.215 Checking for size of "void *" : 8 00:12:08.215 Checking for size of "void *" : 8 (cached) 00:12:08.215 Library m found: YES 00:12:08.215 Library numa found: NO 00:12:08.215 Library fdt found: NO 00:12:08.215 Library execinfo found: YES 00:12:08.215 Has header "execinfo.h" : YES 00:12:08.215 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.0.3 00:12:08.215 Run-time dependency libarchive found: NO (tried pkgconfig) 00:12:08.215 Run-time dependency libbsd found: NO (tried pkgconfig) 00:12:08.215 Run-time dependency jansson found: NO (tried pkgconfig) 00:12:08.215 Run-time dependency openssl found: YES 3.0.13 00:12:08.215 Run-time dependency libpcap found: NO (tried pkgconfig) 00:12:08.215 Library pcap found: YES 00:12:08.215 Has header "pcap.h" with dependency -lpcap: YES 00:12:08.215 Compiler for C supports arguments -Wcast-qual: YES 00:12:08.215 Compiler for C supports arguments -Wdeprecated: YES 00:12:08.215 Compiler for C supports arguments -Wformat: YES 00:12:08.215 Compiler for C supports arguments -Wformat-nonliteral: YES 00:12:08.215 Compiler for C supports arguments -Wformat-security: YES 00:12:08.215 Compiler for C supports arguments -Wmissing-declarations: YES 00:12:08.215 Compiler for C supports arguments -Wmissing-prototypes: YES 00:12:08.215 Compiler for C supports arguments -Wnested-externs: YES 00:12:08.215 Compiler for C supports arguments -Wold-style-definition: YES 00:12:08.215 Compiler for C supports arguments -Wpointer-arith: YES 00:12:08.215 Compiler for C supports arguments -Wsign-compare: YES 00:12:08.215 Compiler for C supports arguments -Wstrict-prototypes: YES 00:12:08.215 Compiler for C supports arguments -Wundef: YES 00:12:08.215 Compiler for C supports arguments -Wwrite-strings: YES 00:12:08.215 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:12:08.215 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:12:08.215 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:12:08.215 Compiler for C supports arguments -mavx512f: YES 00:12:08.215 Checking if "AVX512 checking" compiles: YES 00:12:08.215 Fetching value of define "__SSE4_2__" : 1 00:12:08.215 Fetching value of define "__AES__" : 1 00:12:08.215 Fetching value of define 
"__AVX__" : 1 00:12:08.215 Fetching value of define "__AVX2__" : 1 00:12:08.215 Fetching value of define "__AVX512BW__" : 1 00:12:08.215 Fetching value of define "__AVX512CD__" : 1 00:12:08.215 Fetching value of define "__AVX512DQ__" : 1 00:12:08.215 Fetching value of define "__AVX512F__" : 1 00:12:08.215 Fetching value of define "__AVX512VL__" : 1 00:12:08.215 Fetching value of define "__PCLMUL__" : 1 00:12:08.215 Fetching value of define "__RDRND__" : 1 00:12:08.215 Fetching value of define "__RDSEED__" : 1 00:12:08.215 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:12:08.215 Fetching value of define "__znver1__" : (undefined) 00:12:08.215 Fetching value of define "__znver2__" : (undefined) 00:12:08.215 Fetching value of define "__znver3__" : (undefined) 00:12:08.215 Fetching value of define "__znver4__" : (undefined) 00:12:08.215 Compiler for C supports arguments -Wno-format-truncation: NO 00:12:08.215 Message: lib/log: Defining dependency "log" 00:12:08.215 Message: lib/kvargs: Defining dependency "kvargs" 00:12:08.215 Message: lib/telemetry: Defining dependency "telemetry" 00:12:08.215 Checking if "Detect argument count for CPU_OR" compiles: YES 00:12:08.215 Checking for function "getentropy" : YES 00:12:08.215 Message: lib/eal: Defining dependency "eal" 00:12:08.215 Message: lib/ring: Defining dependency "ring" 00:12:08.215 Message: lib/rcu: Defining dependency "rcu" 00:12:08.215 Message: lib/mempool: Defining dependency "mempool" 00:12:08.215 Message: lib/mbuf: Defining dependency "mbuf" 00:12:08.215 Fetching value of define "__PCLMUL__" : 1 (cached) 00:12:08.215 Fetching value of define "__AVX512F__" : 1 (cached) 00:12:08.215 Fetching value of define "__AVX512BW__" : 1 (cached) 00:12:08.215 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:12:08.215 Fetching value of define "__AVX512VL__" : 1 (cached) 00:12:08.215 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:12:08.215 Compiler for C supports arguments -mpclmul: YES 00:12:08.215 Compiler for C supports arguments -maes: YES 00:12:08.215 Compiler for C supports arguments -mavx512f: YES (cached) 00:12:08.215 Compiler for C supports arguments -mavx512bw: YES 00:12:08.215 Compiler for C supports arguments -mavx512dq: YES 00:12:08.215 Compiler for C supports arguments -mavx512vl: YES 00:12:08.215 Compiler for C supports arguments -mvpclmulqdq: YES 00:12:08.215 Compiler for C supports arguments -mavx2: YES 00:12:08.215 Compiler for C supports arguments -mavx: YES 00:12:08.215 Message: lib/net: Defining dependency "net" 00:12:08.215 Message: lib/meter: Defining dependency "meter" 00:12:08.215 Message: lib/ethdev: Defining dependency "ethdev" 00:12:08.215 Message: lib/pci: Defining dependency "pci" 00:12:08.215 Message: lib/cmdline: Defining dependency "cmdline" 00:12:08.215 Message: lib/hash: Defining dependency "hash" 00:12:08.215 Message: lib/timer: Defining dependency "timer" 00:12:08.215 Message: lib/compressdev: Defining dependency "compressdev" 00:12:08.215 Message: lib/cryptodev: Defining dependency "cryptodev" 00:12:08.215 Message: lib/dmadev: Defining dependency "dmadev" 00:12:08.215 Compiler for C supports arguments -Wno-cast-qual: YES 00:12:08.215 Message: lib/reorder: Defining dependency "reorder" 00:12:08.215 Message: lib/security: Defining dependency "security" 00:12:08.215 Has header "linux/userfaultfd.h" : NO 00:12:08.215 Has header "linux/vduse.h" : NO 00:12:08.215 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:12:08.215 Message: drivers/bus/pci: Defining 
dependency "bus_pci" 00:12:08.215 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:12:08.215 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:12:08.215 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:12:08.215 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:12:08.215 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:12:08.215 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:12:08.215 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:12:08.215 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:12:08.215 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:12:08.215 Program doxygen found: YES (/usr/local/bin/doxygen) 00:12:08.215 Configuring doxy-api-html.conf using configuration 00:12:08.215 Configuring doxy-api-man.conf using configuration 00:12:08.215 Program mandb found: NO 00:12:08.215 Program sphinx-build found: NO 00:12:08.215 Configuring rte_build_config.h using configuration 00:12:08.215 Message: 00:12:08.215 ================= 00:12:08.215 Applications Enabled 00:12:08.215 ================= 00:12:08.215 00:12:08.215 apps: 00:12:08.215 00:12:08.215 00:12:08.215 Message: 00:12:08.215 ================= 00:12:08.215 Libraries Enabled 00:12:08.215 ================= 00:12:08.215 00:12:08.215 libs: 00:12:08.215 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:12:08.215 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:12:08.215 cryptodev, dmadev, reorder, security, 00:12:08.215 00:12:08.215 Message: 00:12:08.215 =============== 00:12:08.215 Drivers Enabled 00:12:08.215 =============== 00:12:08.215 00:12:08.215 common: 00:12:08.215 00:12:08.215 bus: 00:12:08.216 pci, vdev, 00:12:08.216 mempool: 00:12:08.216 ring, 00:12:08.216 dma: 00:12:08.216 00:12:08.216 net: 00:12:08.216 00:12:08.216 crypto: 00:12:08.216 00:12:08.216 compress: 00:12:08.216 00:12:08.216 00:12:08.216 Message: 00:12:08.216 ================= 00:12:08.216 Content Skipped 00:12:08.216 ================= 00:12:08.216 00:12:08.216 apps: 00:12:08.216 dumpcap: explicitly disabled via build config 00:12:08.216 graph: explicitly disabled via build config 00:12:08.216 pdump: explicitly disabled via build config 00:12:08.216 proc-info: explicitly disabled via build config 00:12:08.216 test-acl: explicitly disabled via build config 00:12:08.216 test-bbdev: explicitly disabled via build config 00:12:08.216 test-cmdline: explicitly disabled via build config 00:12:08.216 test-compress-perf: explicitly disabled via build config 00:12:08.216 test-crypto-perf: explicitly disabled via build config 00:12:08.216 test-dma-perf: explicitly disabled via build config 00:12:08.216 test-eventdev: explicitly disabled via build config 00:12:08.216 test-fib: explicitly disabled via build config 00:12:08.216 test-flow-perf: explicitly disabled via build config 00:12:08.216 test-gpudev: explicitly disabled via build config 00:12:08.216 test-mldev: explicitly disabled via build config 00:12:08.216 test-pipeline: explicitly disabled via build config 00:12:08.216 test-pmd: explicitly disabled via build config 00:12:08.216 test-regex: explicitly disabled via build config 00:12:08.216 test-sad: explicitly disabled via build config 00:12:08.216 test-security-perf: explicitly disabled via build config 00:12:08.216 00:12:08.216 libs: 00:12:08.216 metrics: explicitly disabled via build config 00:12:08.216 acl: explicitly disabled via 
build config 00:12:08.216 bbdev: explicitly disabled via build config 00:12:08.216 bitratestats: explicitly disabled via build config 00:12:08.216 bpf: explicitly disabled via build config 00:12:08.216 cfgfile: explicitly disabled via build config 00:12:08.216 distributor: explicitly disabled via build config 00:12:08.216 efd: explicitly disabled via build config 00:12:08.216 eventdev: explicitly disabled via build config 00:12:08.216 dispatcher: explicitly disabled via build config 00:12:08.216 gpudev: explicitly disabled via build config 00:12:08.216 gro: explicitly disabled via build config 00:12:08.216 gso: explicitly disabled via build config 00:12:08.216 ip_frag: explicitly disabled via build config 00:12:08.216 jobstats: explicitly disabled via build config 00:12:08.216 latencystats: explicitly disabled via build config 00:12:08.216 lpm: explicitly disabled via build config 00:12:08.216 member: explicitly disabled via build config 00:12:08.216 pcapng: explicitly disabled via build config 00:12:08.216 power: only supported on Linux 00:12:08.216 rawdev: explicitly disabled via build config 00:12:08.216 regexdev: explicitly disabled via build config 00:12:08.216 mldev: explicitly disabled via build config 00:12:08.216 rib: explicitly disabled via build config 00:12:08.216 sched: explicitly disabled via build config 00:12:08.216 stack: explicitly disabled via build config 00:12:08.216 vhost: only supported on Linux 00:12:08.216 ipsec: explicitly disabled via build config 00:12:08.216 pdcp: explicitly disabled via build config 00:12:08.216 fib: explicitly disabled via build config 00:12:08.216 port: explicitly disabled via build config 00:12:08.216 pdump: explicitly disabled via build config 00:12:08.216 table: explicitly disabled via build config 00:12:08.216 pipeline: explicitly disabled via build config 00:12:08.216 graph: explicitly disabled via build config 00:12:08.216 node: explicitly disabled via build config 00:12:08.216 00:12:08.216 drivers: 00:12:08.216 common/cpt: not in enabled drivers build config 00:12:08.216 common/dpaax: not in enabled drivers build config 00:12:08.216 common/iavf: not in enabled drivers build config 00:12:08.216 common/idpf: not in enabled drivers build config 00:12:08.216 common/mvep: not in enabled drivers build config 00:12:08.216 common/octeontx: not in enabled drivers build config 00:12:08.216 bus/auxiliary: not in enabled drivers build config 00:12:08.216 bus/cdx: not in enabled drivers build config 00:12:08.216 bus/dpaa: not in enabled drivers build config 00:12:08.216 bus/fslmc: not in enabled drivers build config 00:12:08.216 bus/ifpga: not in enabled drivers build config 00:12:08.216 bus/platform: not in enabled drivers build config 00:12:08.216 bus/vmbus: not in enabled drivers build config 00:12:08.216 common/cnxk: not in enabled drivers build config 00:12:08.216 common/mlx5: not in enabled drivers build config 00:12:08.216 common/nfp: not in enabled drivers build config 00:12:08.216 common/qat: not in enabled drivers build config 00:12:08.216 common/sfc_efx: not in enabled drivers build config 00:12:08.216 mempool/bucket: not in enabled drivers build config 00:12:08.216 mempool/cnxk: not in enabled drivers build config 00:12:08.216 mempool/dpaa: not in enabled drivers build config 00:12:08.216 mempool/dpaa2: not in enabled drivers build config 00:12:08.216 mempool/octeontx: not in enabled drivers build config 00:12:08.216 mempool/stack: not in enabled drivers build config 00:12:08.216 dma/cnxk: not in enabled drivers build config 
00:12:08.216 dma/dpaa: not in enabled drivers build config 00:12:08.216 dma/dpaa2: not in enabled drivers build config 00:12:08.216 dma/hisilicon: not in enabled drivers build config 00:12:08.216 dma/idxd: not in enabled drivers build config 00:12:08.216 dma/ioat: not in enabled drivers build config 00:12:08.216 dma/skeleton: not in enabled drivers build config 00:12:08.216 net/af_packet: not in enabled drivers build config 00:12:08.216 net/af_xdp: not in enabled drivers build config 00:12:08.216 net/ark: not in enabled drivers build config 00:12:08.216 net/atlantic: not in enabled drivers build config 00:12:08.216 net/avp: not in enabled drivers build config 00:12:08.216 net/axgbe: not in enabled drivers build config 00:12:08.216 net/bnx2x: not in enabled drivers build config 00:12:08.216 net/bnxt: not in enabled drivers build config 00:12:08.216 net/bonding: not in enabled drivers build config 00:12:08.216 net/cnxk: not in enabled drivers build config 00:12:08.216 net/cpfl: not in enabled drivers build config 00:12:08.216 net/cxgbe: not in enabled drivers build config 00:12:08.216 net/dpaa: not in enabled drivers build config 00:12:08.216 net/dpaa2: not in enabled drivers build config 00:12:08.216 net/e1000: not in enabled drivers build config 00:12:08.216 net/ena: not in enabled drivers build config 00:12:08.216 net/enetc: not in enabled drivers build config 00:12:08.216 net/enetfec: not in enabled drivers build config 00:12:08.216 net/enic: not in enabled drivers build config 00:12:08.216 net/failsafe: not in enabled drivers build config 00:12:08.216 net/fm10k: not in enabled drivers build config 00:12:08.216 net/gve: not in enabled drivers build config 00:12:08.216 net/hinic: not in enabled drivers build config 00:12:08.216 net/hns3: not in enabled drivers build config 00:12:08.216 net/i40e: not in enabled drivers build config 00:12:08.216 net/iavf: not in enabled drivers build config 00:12:08.216 net/ice: not in enabled drivers build config 00:12:08.216 net/idpf: not in enabled drivers build config 00:12:08.216 net/igc: not in enabled drivers build config 00:12:08.216 net/ionic: not in enabled drivers build config 00:12:08.216 net/ipn3ke: not in enabled drivers build config 00:12:08.216 net/ixgbe: not in enabled drivers build config 00:12:08.216 net/mana: not in enabled drivers build config 00:12:08.216 net/memif: not in enabled drivers build config 00:12:08.216 net/mlx4: not in enabled drivers build config 00:12:08.216 net/mlx5: not in enabled drivers build config 00:12:08.216 net/mvneta: not in enabled drivers build config 00:12:08.216 net/mvpp2: not in enabled drivers build config 00:12:08.216 net/netvsc: not in enabled drivers build config 00:12:08.216 net/nfb: not in enabled drivers build config 00:12:08.216 net/nfp: not in enabled drivers build config 00:12:08.216 net/ngbe: not in enabled drivers build config 00:12:08.216 net/null: not in enabled drivers build config 00:12:08.216 net/octeontx: not in enabled drivers build config 00:12:08.216 net/octeon_ep: not in enabled drivers build config 00:12:08.216 net/pcap: not in enabled drivers build config 00:12:08.216 net/pfe: not in enabled drivers build config 00:12:08.216 net/qede: not in enabled drivers build config 00:12:08.216 net/ring: not in enabled drivers build config 00:12:08.216 net/sfc: not in enabled drivers build config 00:12:08.216 net/softnic: not in enabled drivers build config 00:12:08.216 net/tap: not in enabled drivers build config 00:12:08.216 net/thunderx: not in enabled drivers build config 00:12:08.216 
net/txgbe: not in enabled drivers build config 00:12:08.216 net/vdev_netvsc: not in enabled drivers build config 00:12:08.216 net/vhost: not in enabled drivers build config 00:12:08.216 net/virtio: not in enabled drivers build config 00:12:08.216 net/vmxnet3: not in enabled drivers build config 00:12:08.216 raw/*: missing internal dependency, "rawdev" 00:12:08.216 crypto/armv8: not in enabled drivers build config 00:12:08.216 crypto/bcmfs: not in enabled drivers build config 00:12:08.216 crypto/caam_jr: not in enabled drivers build config 00:12:08.216 crypto/ccp: not in enabled drivers build config 00:12:08.216 crypto/cnxk: not in enabled drivers build config 00:12:08.216 crypto/dpaa_sec: not in enabled drivers build config 00:12:08.216 crypto/dpaa2_sec: not in enabled drivers build config 00:12:08.216 crypto/ipsec_mb: not in enabled drivers build config 00:12:08.216 crypto/mlx5: not in enabled drivers build config 00:12:08.216 crypto/mvsam: not in enabled drivers build config 00:12:08.216 crypto/nitrox: not in enabled drivers build config 00:12:08.216 crypto/null: not in enabled drivers build config 00:12:08.216 crypto/octeontx: not in enabled drivers build config 00:12:08.216 crypto/openssl: not in enabled drivers build config 00:12:08.216 crypto/scheduler: not in enabled drivers build config 00:12:08.216 crypto/uadk: not in enabled drivers build config 00:12:08.216 crypto/virtio: not in enabled drivers build config 00:12:08.216 compress/isal: not in enabled drivers build config 00:12:08.216 compress/mlx5: not in enabled drivers build config 00:12:08.216 compress/octeontx: not in enabled drivers build config 00:12:08.216 compress/zlib: not in enabled drivers build config 00:12:08.216 regex/*: missing internal dependency, "regexdev" 00:12:08.216 ml/*: missing internal dependency, "mldev" 00:12:08.216 vdpa/*: missing internal dependency, "vhost" 00:12:08.216 event/*: missing internal dependency, "eventdev" 00:12:08.216 baseband/*: missing internal dependency, "bbdev" 00:12:08.216 gpu/*: missing internal dependency, "gpudev" 00:12:08.216 00:12:08.216 00:12:08.216 Build targets in project: 81 00:12:08.216 00:12:08.216 DPDK 23.11.0 00:12:08.216 00:12:08.216 User defined options 00:12:08.216 default_library : static 00:12:08.216 libdir : lib 00:12:08.216 prefix : / 00:12:08.217 c_args : -fPIC -Werror 00:12:08.217 c_link_args : 00:12:08.217 cpu_instruction_set: native 00:12:08.217 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:12:08.217 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:12:08.217 enable_docs : false 00:12:08.217 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:12:08.217 enable_kmods : true 00:12:08.217 tests : false 00:12:08.217 00:12:08.217 Found ninja-1.11.1 at /usr/local/bin/ninja 00:12:08.217 ninja: Entering directory `/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:12:08.217 [1/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:12:08.217 [2/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:12:08.217 [3/231] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:12:08.217 [4/231] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:12:08.217 [5/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:12:08.217 [6/231] Compiling C object lib/librte_log.a.p/log_log.c.o 00:12:08.217 [7/231] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:12:08.217 [8/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:12:08.217 [9/231] Linking static target lib/librte_log.a 00:12:08.217 [10/231] Linking static target lib/librte_kvargs.a 00:12:08.217 [11/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:12:08.217 [12/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:12:08.217 [13/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:12:08.217 [14/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:12:08.217 [15/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:12:08.217 [16/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:12:08.217 [17/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:12:08.217 [18/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:12:08.217 [19/231] Linking static target lib/librte_telemetry.a 00:12:08.217 [20/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:12:08.217 [21/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:12:08.474 [22/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:12:08.474 [23/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:12:08.474 [24/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:12:08.474 [25/231] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:12:08.474 [26/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:12:08.474 [27/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:12:08.474 [28/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:12:08.474 [29/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:12:08.731 [30/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:12:08.731 [31/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:12:08.731 [32/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:12:08.731 [33/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:12:08.731 [34/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:12:08.731 [35/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:12:08.731 [36/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:12:08.990 [37/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:12:08.990 [38/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:12:08.990 [39/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:12:08.990 [40/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:12:08.990 [41/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:12:08.990 [42/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:12:08.990 [43/231] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:12:09.249 [44/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:12:09.249 [45/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:12:09.249 [46/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:12:09.249 [47/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:12:09.249 [48/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:12:09.249 [49/231] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:12:09.249 [50/231] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:12:09.249 [51/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:12:09.249 [52/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:12:09.249 [53/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:12:09.249 [54/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:12:09.249 [55/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:12:09.509 [56/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:12:09.509 [57/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:12:09.509 [58/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:12:09.509 [59/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:12:09.509 [60/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:12:09.509 [61/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:12:09.509 [62/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:12:09.509 [63/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:12:09.509 [64/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:12:09.509 [65/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:12:09.509 [66/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:12:09.768 [67/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:12:09.768 [68/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:12:09.768 [69/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:12:09.768 [70/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:12:09.768 [71/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:12:09.768 [72/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:12:10.028 [73/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:12:10.028 [74/231] Linking static target lib/librte_eal.a 00:12:10.028 [75/231] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:12:10.028 [76/231] Linking static target lib/librte_ring.a 00:12:10.028 [77/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:12:10.028 [78/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:12:10.028 [79/231] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:12:10.028 [80/231] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:12:10.028 [81/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:12:10.028 [82/231] Linking target lib/librte_log.so.24.0 00:12:10.028 [83/231] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:12:10.287 
[84/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:12:10.287 [85/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:12:10.287 [86/231] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:12:10.287 [87/231] Linking static target lib/librte_mempool.a 00:12:10.287 [88/231] Linking target lib/librte_telemetry.so.24.0 00:12:10.287 [89/231] Linking target lib/librte_kvargs.so.24.0 00:12:10.547 [90/231] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:12:10.547 [91/231] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:12:10.547 [92/231] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:12:10.547 [93/231] Linking static target lib/net/libnet_crc_avx512_lib.a 00:12:10.547 [94/231] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:12:10.547 [95/231] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:12:10.547 [96/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:12:10.547 [97/231] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:12:10.547 [98/231] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:12:10.547 [99/231] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:12:10.547 [100/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:12:10.547 [101/231] Linking static target lib/librte_mbuf.a 00:12:10.807 [102/231] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:12:10.807 [103/231] Linking static target lib/librte_rcu.a 00:12:10.807 [104/231] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:12:10.807 [105/231] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:12:10.807 [106/231] Linking static target lib/librte_meter.a 00:12:10.807 [107/231] Linking static target lib/librte_net.a 00:12:11.066 [108/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:12:11.066 [109/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:12:11.066 [110/231] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:12:11.066 [111/231] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:12:11.066 [112/231] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:12:11.066 [113/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:12:11.066 [114/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:12:11.325 [115/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:12:11.325 [116/231] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:12:11.584 [117/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:12:11.584 [118/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:12:11.584 [119/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:12:11.584 [120/231] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:12:11.584 [121/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:12:11.584 [122/231] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:12:11.584 [123/231] Linking static target lib/librte_pci.a 00:12:11.584 [124/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:12:11.584 [125/231] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:12:11.584 [126/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:12:11.843 [127/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:12:11.843 [128/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:12:11.843 [129/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:12:11.843 [130/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:12:11.843 [131/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:12:11.843 [132/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:12:11.843 [133/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:12:11.843 [134/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:12:11.843 [135/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:12:11.843 [136/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:12:11.843 [137/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:12:11.843 [138/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:12:12.102 [139/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:12:12.102 [140/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:12:12.102 [141/231] Linking static target lib/librte_cmdline.a 00:12:12.102 [142/231] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:12:12.361 [143/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:12:12.361 [144/231] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:12:12.361 [145/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:12:12.361 [146/231] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:12:12.361 [147/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:12:12.361 [148/231] Linking static target lib/librte_timer.a 00:12:12.620 [149/231] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:12:12.620 [150/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:12:12.620 [151/231] Linking static target lib/librte_compressdev.a 00:12:12.620 [152/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:12:12.620 [153/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:12:12.620 [154/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:12:12.879 [155/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:12:12.879 [156/231] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:12:12.879 [157/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:12:12.879 [158/231] Linking static target lib/librte_dmadev.a 00:12:13.139 [159/231] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:12:13.139 [160/231] Linking static target lib/librte_reorder.a 00:12:13.139 [161/231] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:12:13.139 [162/231] Linking static target lib/librte_security.a 00:12:13.139 [163/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:12:13.139 [164/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:12:13.139 
[165/231] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:13.139 [166/231] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:12:13.139 [167/231] Linking static target lib/librte_hash.a 00:12:13.139 [168/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:12:13.139 [169/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:12:13.139 [170/231] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:13.139 [171/231] Linking static target drivers/libtmp_rte_bus_pci.a 00:12:13.399 [172/231] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:12:13.399 [173/231] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:12:13.399 [174/231] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:12:13.399 [175/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:12:13.399 [176/231] Linking static target lib/librte_ethdev.a 00:12:13.399 [177/231] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:12:13.399 [178/231] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:12:13.399 [179/231] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:12:13.399 [180/231] Linking static target drivers/librte_bus_pci.a 00:12:13.399 [181/231] Generating kernel/freebsd/contigmem with a custom command 00:12:13.399 machine -> /usr/src/sys/amd64/include 00:12:13.399 x86 -> /usr/src/sys/x86/include 00:12:13.399 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:12:13.399 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:12:13.399 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:12:13.399 touch opt_global.h 00:12:13.399 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:12:13.399 ld.lld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:12:13.399 :> export_syms 00:12:13.399 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:12:13.399 objcopy --strip-debug contigmem.ko 00:12:13.399 [182/231] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:12:13.669 [183/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:12:13.669 [184/231] Linking static target drivers/libtmp_rte_bus_vdev.a 00:12:13.669 [185/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:12:13.669 [186/231] Linking static target lib/librte_cryptodev.a 00:12:13.669 [187/231] Generating kernel/freebsd/nic_uio with a custom command 00:12:13.669 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:12:13.669 ld.lld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:12:13.669 :> export_syms 00:12:13.669 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:12:13.669 objcopy --strip-debug nic_uio.ko 00:12:13.669 [188/231] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:12:13.669 [189/231] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:12:13.669 [190/231] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:12:13.669 [191/231] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:12:13.669 [192/231] Linking static target drivers/librte_bus_vdev.a 00:12:13.927 [193/231] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:14.186 [194/231] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:12:14.186 [195/231] Linking static target drivers/libtmp_rte_mempool_ring.a 00:12:14.445 [196/231] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:12:14.445 [197/231] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:12:14.445 [198/231] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:12:14.445 [199/231] Linking static target drivers/librte_mempool_ring.a 00:12:15.381 [200/231] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:22.050 [201/231] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:23.430 [202/231] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:12:23.688 [203/231] Linking target lib/librte_eal.so.24.0 00:12:23.688 [204/231] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:12:23.688 [205/231] Linking target drivers/librte_bus_vdev.so.24.0 00:12:23.688 [206/231] Linking target lib/librte_meter.so.24.0 00:12:23.688 [207/231] Linking target lib/librte_ring.so.24.0 00:12:23.688 [208/231] Linking target lib/librte_dmadev.so.24.0 00:12:23.688 [209/231] Linking target lib/librte_timer.so.24.0 00:12:23.688 [210/231] Linking target lib/librte_pci.so.24.0 00:12:23.946 [211/231] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:12:23.946 [212/231] 
Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:12:23.946 [213/231] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:12:23.946 [214/231] Linking target lib/librte_rcu.so.24.0 00:12:23.946 [215/231] Linking target drivers/librte_bus_pci.so.24.0 00:12:23.946 [216/231] Linking target lib/librte_mempool.so.24.0 00:12:23.946 [217/231] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:12:23.946 [218/231] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:12:24.204 [219/231] Linking target drivers/librte_mempool_ring.so.24.0 00:12:24.204 [220/231] Linking target lib/librte_mbuf.so.24.0 00:12:24.204 [221/231] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:12:24.204 [222/231] Linking target lib/librte_compressdev.so.24.0 00:12:24.204 [223/231] Linking target lib/librte_reorder.so.24.0 00:12:24.204 [224/231] Linking target lib/librte_net.so.24.0 00:12:24.204 [225/231] Linking target lib/librte_cryptodev.so.24.0 00:12:24.463 [226/231] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:12:24.463 [227/231] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:12:24.463 [228/231] Linking target lib/librte_cmdline.so.24.0 00:12:24.463 [229/231] Linking target lib/librte_hash.so.24.0 00:12:24.463 [230/231] Linking target lib/librte_security.so.24.0 00:12:24.463 [231/231] Linking target lib/librte_ethdev.so.24.0 00:12:24.463 INFO: autodetecting backend as ninja 00:12:24.463 INFO: calculating backend command to run: /usr/local/bin/ninja -C /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:12:25.398 CC lib/ut_mock/mock.o 00:12:25.398 CC lib/ut/ut.o 00:12:25.398 CC lib/log/log.o 00:12:25.398 CC lib/log/log_flags.o 00:12:25.398 CC lib/log/log_deprecated.o 00:12:25.398 LIB libspdk_ut_mock.a 00:12:25.398 LIB libspdk_ut.a 00:12:25.398 LIB libspdk_log.a 00:12:25.398 CC lib/util/base64.o 00:12:25.398 CC lib/util/bit_array.o 00:12:25.398 CC lib/util/cpuset.o 00:12:25.398 CC lib/util/crc32.o 00:12:25.398 CC lib/util/crc16.o 00:12:25.398 CC lib/util/crc32c.o 00:12:25.398 CC lib/util/crc32_ieee.o 00:12:25.398 CXX lib/trace_parser/trace.o 00:12:25.398 CC lib/dma/dma.o 00:12:25.398 CC lib/ioat/ioat.o 00:12:25.398 CC lib/util/crc64.o 00:12:25.398 CC lib/util/dif.o 00:12:25.398 CC lib/util/fd.o 00:12:25.655 LIB libspdk_dma.a 00:12:25.655 CC lib/util/file.o 00:12:25.655 CC lib/util/hexlify.o 00:12:25.655 CC lib/util/iov.o 00:12:25.655 CC lib/util/math.o 00:12:25.655 CC lib/util/pipe.o 00:12:25.655 CC lib/util/strerror_tls.o 00:12:25.655 CC lib/util/string.o 00:12:25.655 CC lib/util/uuid.o 00:12:25.655 LIB libspdk_ioat.a 00:12:25.655 CC lib/util/fd_group.o 00:12:25.655 CC lib/util/xor.o 00:12:25.655 CC lib/util/zipf.o 00:12:26.229 LIB libspdk_util.a 00:12:26.229 LIB libspdk_trace_parser.a 00:12:26.229 CC lib/rdma/common.o 00:12:26.229 CC lib/rdma/rdma_verbs.o 00:12:26.229 CC lib/vmd/vmd.o 00:12:26.229 CC lib/vmd/led.o 00:12:26.229 CC lib/env_dpdk/env.o 00:12:26.229 CC lib/env_dpdk/memory.o 00:12:26.230 CC lib/env_dpdk/pci.o 00:12:26.230 CC lib/conf/conf.o 00:12:26.230 CC lib/idxd/idxd.o 00:12:26.230 CC lib/json/json_parse.o 00:12:26.230 CC lib/json/json_util.o 00:12:26.230 CC lib/env_dpdk/init.o 00:12:26.489 LIB libspdk_rdma.a 00:12:26.489 CC lib/idxd/idxd_user.o 00:12:26.489 LIB libspdk_conf.a 00:12:26.489 CC lib/json/json_write.o 00:12:26.489 CC lib/env_dpdk/threads.o 00:12:26.489 CC 
lib/env_dpdk/pci_ioat.o 00:12:26.489 CC lib/env_dpdk/pci_virtio.o 00:12:26.489 CC lib/env_dpdk/pci_vmd.o 00:12:26.489 CC lib/env_dpdk/pci_idxd.o 00:12:26.489 CC lib/env_dpdk/pci_event.o 00:12:26.489 CC lib/env_dpdk/sigbus_handler.o 00:12:26.489 CC lib/env_dpdk/pci_dpdk.o 00:12:26.489 CC lib/env_dpdk/pci_dpdk_2207.o 00:12:26.489 CC lib/env_dpdk/pci_dpdk_2211.o 00:12:26.489 LIB libspdk_idxd.a 00:12:26.489 LIB libspdk_vmd.a 00:12:26.747 LIB libspdk_json.a 00:12:26.747 CC lib/jsonrpc/jsonrpc_server.o 00:12:26.747 CC lib/jsonrpc/jsonrpc_client.o 00:12:26.747 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:12:26.747 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:12:27.005 LIB libspdk_jsonrpc.a 00:12:27.005 LIB libspdk_env_dpdk.a 00:12:27.263 CC lib/rpc/rpc.o 00:12:27.263 LIB libspdk_rpc.a 00:12:27.521 CC lib/trace/trace.o 00:12:27.521 CC lib/trace/trace_flags.o 00:12:27.521 CC lib/trace/trace_rpc.o 00:12:27.521 CC lib/notify/notify.o 00:12:27.521 CC lib/notify/notify_rpc.o 00:12:27.521 CC lib/sock/sock.o 00:12:27.521 CC lib/sock/sock_rpc.o 00:12:27.521 LIB libspdk_notify.a 00:12:27.521 LIB libspdk_trace.a 00:12:27.521 LIB libspdk_sock.a 00:12:27.778 CC lib/thread/thread.o 00:12:27.778 CC lib/thread/iobuf.o 00:12:27.778 CC lib/nvme/nvme_ctrlr_cmd.o 00:12:27.778 CC lib/nvme/nvme_fabric.o 00:12:27.778 CC lib/nvme/nvme_ctrlr.o 00:12:27.778 CC lib/nvme/nvme_ns.o 00:12:27.778 CC lib/nvme/nvme_ns_cmd.o 00:12:27.778 CC lib/nvme/nvme_pcie_common.o 00:12:27.778 CC lib/nvme/nvme_qpair.o 00:12:27.778 CC lib/nvme/nvme_pcie.o 00:12:28.037 CC lib/nvme/nvme.o 00:12:28.295 CC lib/nvme/nvme_quirks.o 00:12:28.295 CC lib/nvme/nvme_transport.o 00:12:28.295 CC lib/nvme/nvme_discovery.o 00:12:28.295 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:12:28.295 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:12:28.295 CC lib/nvme/nvme_tcp.o 00:12:28.295 LIB libspdk_thread.a 00:12:28.295 CC lib/nvme/nvme_opal.o 00:12:28.555 CC lib/accel/accel.o 00:12:28.555 CC lib/blob/blobstore.o 00:12:28.555 CC lib/accel/accel_rpc.o 00:12:28.555 CC lib/blob/request.o 00:12:28.555 CC lib/accel/accel_sw.o 00:12:28.555 CC lib/blob/zeroes.o 00:12:28.813 CC lib/blob/blob_bs_dev.o 00:12:28.813 CC lib/nvme/nvme_io_msg.o 00:12:28.813 CC lib/nvme/nvme_poll_group.o 00:12:28.813 CC lib/init/json_config.o 00:12:28.813 CC lib/nvme/nvme_zns.o 00:12:28.813 CC lib/init/subsystem.o 00:12:28.813 CC lib/nvme/nvme_cuse.o 00:12:28.813 CC lib/nvme/nvme_rdma.o 00:12:28.813 CC lib/init/subsystem_rpc.o 00:12:28.813 CC lib/init/rpc.o 00:12:29.071 LIB libspdk_init.a 00:12:29.071 LIB libspdk_accel.a 00:12:29.071 CC lib/event/app.o 00:12:29.071 CC lib/event/log_rpc.o 00:12:29.071 CC lib/event/reactor.o 00:12:29.071 CC lib/bdev/bdev.o 00:12:29.072 CC lib/event/app_rpc.o 00:12:29.072 CC lib/bdev/bdev_rpc.o 00:12:29.072 CC lib/bdev/bdev_zone.o 00:12:29.072 CC lib/bdev/part.o 00:12:29.330 CC lib/bdev/scsi_nvme.o 00:12:29.330 CC lib/event/scheduler_static.o 00:12:29.330 LIB libspdk_event.a 00:12:29.589 LIB libspdk_nvme.a 00:12:29.850 LIB libspdk_blob.a 00:12:30.108 CC lib/blobfs/tree.o 00:12:30.108 CC lib/blobfs/blobfs.o 00:12:30.108 CC lib/lvol/lvol.o 00:12:30.367 LIB libspdk_bdev.a 00:12:30.367 LIB libspdk_blobfs.a 00:12:30.367 LIB libspdk_lvol.a 00:12:30.367 CC lib/nvmf/ctrlr.o 00:12:30.367 CC lib/nvmf/ctrlr_bdev.o 00:12:30.367 CC lib/nvmf/ctrlr_discovery.o 00:12:30.367 CC lib/nvmf/nvmf.o 00:12:30.367 CC lib/nvmf/nvmf_rpc.o 00:12:30.367 CC lib/nvmf/subsystem.o 00:12:30.367 CC lib/nvmf/transport.o 00:12:30.367 CC lib/scsi/dev.o 00:12:30.367 CC lib/nvmf/tcp.o 00:12:30.367 CC lib/scsi/lun.o 00:12:30.625 CC 
lib/nvmf/rdma.o 00:12:30.625 CC lib/scsi/port.o 00:12:30.625 CC lib/scsi/scsi.o 00:12:30.885 CC lib/scsi/scsi_bdev.o 00:12:30.885 CC lib/scsi/scsi_pr.o 00:12:30.885 CC lib/scsi/scsi_rpc.o 00:12:30.885 CC lib/scsi/task.o 00:12:31.143 LIB libspdk_scsi.a 00:12:31.143 CC lib/iscsi/conn.o 00:12:31.143 CC lib/iscsi/init_grp.o 00:12:31.143 CC lib/iscsi/iscsi.o 00:12:31.143 CC lib/iscsi/md5.o 00:12:31.143 CC lib/iscsi/param.o 00:12:31.143 CC lib/iscsi/portal_grp.o 00:12:31.143 CC lib/iscsi/tgt_node.o 00:12:31.143 CC lib/iscsi/iscsi_rpc.o 00:12:31.143 CC lib/iscsi/iscsi_subsystem.o 00:12:31.402 CC lib/iscsi/task.o 00:12:31.402 LIB libspdk_nvmf.a 00:12:32.340 LIB libspdk_iscsi.a 00:12:32.599 CC module/env_dpdk/env_dpdk_rpc.o 00:12:32.599 CC module/accel/iaa/accel_iaa.o 00:12:32.599 CC module/accel/iaa/accel_iaa_rpc.o 00:12:32.599 CC module/blob/bdev/blob_bdev.o 00:12:32.599 CC module/accel/error/accel_error.o 00:12:32.599 CC module/accel/error/accel_error_rpc.o 00:12:32.599 CC module/accel/dsa/accel_dsa.o 00:12:32.599 CC module/accel/ioat/accel_ioat.o 00:12:32.599 CC module/sock/posix/posix.o 00:12:32.599 CC module/scheduler/dynamic/scheduler_dynamic.o 00:12:32.599 LIB libspdk_env_dpdk_rpc.a 00:12:32.599 CC module/accel/dsa/accel_dsa_rpc.o 00:12:32.599 CC module/accel/ioat/accel_ioat_rpc.o 00:12:32.599 LIB libspdk_accel_ioat.a 00:12:32.599 LIB libspdk_accel_error.a 00:12:32.599 LIB libspdk_accel_iaa.a 00:12:32.599 LIB libspdk_scheduler_dynamic.a 00:12:32.599 LIB libspdk_blob_bdev.a 00:12:32.599 LIB libspdk_accel_dsa.a 00:12:32.857 CC module/bdev/malloc/bdev_malloc.o 00:12:32.857 CC module/bdev/nvme/bdev_nvme.o 00:12:32.857 CC module/blobfs/bdev/blobfs_bdev.o 00:12:32.857 CC module/bdev/gpt/gpt.o 00:12:32.857 CC module/bdev/delay/vbdev_delay.o 00:12:32.857 CC module/bdev/lvol/vbdev_lvol.o 00:12:32.857 CC module/bdev/error/vbdev_error.o 00:12:32.857 CC module/bdev/null/bdev_null.o 00:12:32.857 CC module/bdev/passthru/vbdev_passthru.o 00:12:32.857 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:12:32.857 CC module/bdev/gpt/vbdev_gpt.o 00:12:32.857 LIB libspdk_sock_posix.a 00:12:32.857 CC module/bdev/delay/vbdev_delay_rpc.o 00:12:32.857 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:12:32.857 CC module/bdev/malloc/bdev_malloc_rpc.o 00:12:33.114 LIB libspdk_blobfs_bdev.a 00:12:33.114 CC module/bdev/null/bdev_null_rpc.o 00:12:33.114 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:12:33.114 LIB libspdk_bdev_passthru.a 00:12:33.114 LIB libspdk_bdev_delay.a 00:12:33.114 LIB libspdk_bdev_gpt.a 00:12:33.114 CC module/bdev/error/vbdev_error_rpc.o 00:12:33.114 LIB libspdk_bdev_malloc.a 00:12:33.114 CC module/bdev/nvme/bdev_nvme_rpc.o 00:12:33.114 CC module/bdev/nvme/nvme_rpc.o 00:12:33.114 CC module/bdev/raid/bdev_raid.o 00:12:33.114 CC module/bdev/raid/bdev_raid_rpc.o 00:12:33.114 CC module/bdev/split/vbdev_split.o 00:12:33.114 LIB libspdk_bdev_error.a 00:12:33.114 LIB libspdk_bdev_null.a 00:12:33.114 CC module/bdev/raid/bdev_raid_sb.o 00:12:33.114 CC module/bdev/split/vbdev_split_rpc.o 00:12:33.114 CC module/bdev/nvme/bdev_mdns_client.o 00:12:33.114 LIB libspdk_bdev_lvol.a 00:12:33.372 CC module/bdev/raid/raid0.o 00:12:33.372 CC module/bdev/zone_block/vbdev_zone_block.o 00:12:33.372 CC module/bdev/raid/raid1.o 00:12:33.372 LIB libspdk_bdev_split.a 00:12:33.372 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:12:33.372 CC module/bdev/raid/concat.o 00:12:33.372 CC module/bdev/aio/bdev_aio_rpc.o 00:12:33.372 CC module/bdev/aio/bdev_aio.o 00:12:33.372 LIB libspdk_bdev_zone_block.a 00:12:33.630 LIB libspdk_bdev_raid.a 
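The contigmem and nic_uio kernel modules generated earlier in this build (steps [181/231] and [187/231]) are what the FreeBSD test run relies on for contiguous memory and for detaching NVMe devices from their kernel driver. The load step itself is not shown in this excerpt (scripts/setup.sh normally handles it), so the following is only a hedged sketch of loading the freshly built modules by hand; the .ko output directory and the tunable values are assumptions, not taken from this run:

    # Hypothetical manual load of the modules built above (run as root).
    BUILD=/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd   # assumed output dir

    # contigmem reads its sizing tunables from the kernel environment at load time.
    kenv hw.contigmem.num_buffers=2
    kenv hw.contigmem.buffer_size=268435456        # 256 MiB per buffer, illustrative

    kldload "$BUILD/contigmem.ko"
    kldload "$BUILD/nic_uio.ko"
    kldstat | grep -E 'contigmem|nic_uio'          # confirm both modules are resident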
00:12:33.630 LIB libspdk_bdev_aio.a 00:12:33.888 LIB libspdk_bdev_nvme.a 00:12:34.144 CC module/event/subsystems/vmd/vmd.o 00:12:34.144 CC module/event/subsystems/vmd/vmd_rpc.o 00:12:34.144 CC module/event/subsystems/iobuf/iobuf.o 00:12:34.144 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:12:34.144 CC module/event/subsystems/sock/sock.o 00:12:34.144 CC module/event/subsystems/scheduler/scheduler.o 00:12:34.401 LIB libspdk_event_vmd.a 00:12:34.401 LIB libspdk_event_sock.a 00:12:34.401 LIB libspdk_event_scheduler.a 00:12:34.401 LIB libspdk_event_iobuf.a 00:12:34.401 CC module/event/subsystems/accel/accel.o 00:12:34.659 LIB libspdk_event_accel.a 00:12:34.659 CC module/event/subsystems/bdev/bdev.o 00:12:34.916 LIB libspdk_event_bdev.a 00:12:34.916 CC module/event/subsystems/scsi/scsi.o 00:12:34.916 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:12:34.916 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:12:35.175 LIB libspdk_event_scsi.a 00:12:35.175 LIB libspdk_event_nvmf.a 00:12:35.175 CC module/event/subsystems/iscsi/iscsi.o 00:12:35.433 LIB libspdk_event_iscsi.a 00:12:35.433 CXX app/trace/trace.o 00:12:35.433 TEST_HEADER include/spdk/config.h 00:12:35.433 CXX test/cpp_headers/accel.o 00:12:35.433 CC test/event/event_perf/event_perf.o 00:12:35.691 CC examples/accel/perf/accel_perf.o 00:12:35.691 CC test/accel/dif/dif.o 00:12:35.691 CC test/dma/test_dma/test_dma.o 00:12:35.691 CC test/bdev/bdevio/bdevio.o 00:12:35.691 CC test/env/mem_callbacks/mem_callbacks.o 00:12:35.691 CC test/blobfs/mkfs/mkfs.o 00:12:35.691 CC test/app/bdev_svc/bdev_svc.o 00:12:35.691 LINK event_perf 00:12:35.691 CXX test/cpp_headers/accel_module.o 00:12:35.691 LINK mkfs 00:12:35.691 LINK bdev_svc 00:12:35.691 LINK dif 00:12:35.691 CXX test/cpp_headers/assert.o 00:12:35.691 LINK test_dma 00:12:35.691 LINK accel_perf 00:12:35.949 LINK bdevio 00:12:35.949 CXX test/cpp_headers/barrier.o 00:12:35.949 LINK spdk_trace 00:12:35.949 CXX test/cpp_headers/base64.o 00:12:36.207 LINK mem_callbacks 00:12:36.207 CXX test/cpp_headers/bdev.o 00:12:36.207 CXX test/cpp_headers/bdev_module.o 00:12:36.466 CC test/env/vtophys/vtophys.o 00:12:36.466 CXX test/cpp_headers/bdev_zone.o 00:12:36.466 LINK vtophys 00:12:36.466 CXX test/cpp_headers/bit_array.o 00:12:36.724 CXX test/cpp_headers/bit_pool.o 00:12:36.724 CC app/trace_record/trace_record.o 00:12:36.982 CXX test/cpp_headers/blob.o 00:12:36.982 LINK spdk_trace_record 00:12:36.982 CXX test/cpp_headers/blob_bdev.o 00:12:37.240 CXX test/cpp_headers/blobfs.o 00:12:37.240 CC app/nvmf_tgt/nvmf_main.o 00:12:37.240 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:12:37.240 LINK nvmf_tgt 00:12:37.498 CXX test/cpp_headers/blobfs_bdev.o 00:12:37.498 LINK env_dpdk_post_init 00:12:37.498 CXX test/cpp_headers/conf.o 00:12:37.756 CXX test/cpp_headers/config.o 00:12:37.756 CXX test/cpp_headers/cpuset.o 00:12:37.756 CXX test/cpp_headers/crc16.o 00:12:37.756 CXX test/cpp_headers/crc32.o 00:12:38.013 CXX test/cpp_headers/crc64.o 00:12:38.270 CXX test/cpp_headers/dif.o 00:12:38.270 CXX test/cpp_headers/dma.o 00:12:38.528 CXX test/cpp_headers/endian.o 00:12:38.785 CXX test/cpp_headers/env.o 00:12:38.785 CXX test/cpp_headers/env_dpdk.o 00:12:39.043 CXX test/cpp_headers/event.o 00:12:39.301 CXX test/cpp_headers/fd.o 00:12:39.559 CXX test/cpp_headers/fd_group.o 00:12:39.559 CC test/event/reactor/reactor.o 00:12:39.559 CXX test/cpp_headers/file.o 00:12:39.559 LINK reactor 00:12:39.559 CXX test/cpp_headers/ftl.o 00:12:39.817 CXX test/cpp_headers/gpt_spec.o 00:12:40.074 CXX test/cpp_headers/hexlify.o 00:12:40.074 
CXX test/cpp_headers/histogram_data.o 00:12:40.074 CXX test/cpp_headers/idxd.o 00:12:40.333 CXX test/cpp_headers/idxd_spec.o 00:12:40.333 CXX test/cpp_headers/init.o 00:12:40.590 CXX test/cpp_headers/ioat.o 00:12:40.590 CXX test/cpp_headers/ioat_spec.o 00:12:40.863 CXX test/cpp_headers/iscsi_spec.o 00:12:40.863 CXX test/cpp_headers/json.o 00:12:41.151 CXX test/cpp_headers/jsonrpc.o 00:12:41.151 CXX test/cpp_headers/likely.o 00:12:41.151 CXX test/cpp_headers/log.o 00:12:41.408 CXX test/cpp_headers/lvol.o 00:12:41.408 CXX test/cpp_headers/memory.o 00:12:41.666 CXX test/cpp_headers/mmio.o 00:12:41.666 CXX test/cpp_headers/nbd.o 00:12:41.666 CXX test/cpp_headers/notify.o 00:12:41.926 CXX test/cpp_headers/nvme.o 00:12:41.926 CXX test/cpp_headers/nvme_intel.o 00:12:42.183 CXX test/cpp_headers/nvme_ocssd.o 00:12:42.183 CXX test/cpp_headers/nvme_ocssd_spec.o 00:12:42.183 CC test/env/memory/memory_ut.o 00:12:42.441 CXX test/cpp_headers/nvme_spec.o 00:12:42.700 CC test/event/reactor_perf/reactor_perf.o 00:12:42.700 CXX test/cpp_headers/nvme_zns.o 00:12:42.700 LINK reactor_perf 00:12:42.700 CXX test/cpp_headers/nvmf.o 00:12:42.958 CC examples/bdev/hello_world/hello_bdev.o 00:12:42.958 LINK memory_ut 00:12:42.958 CXX test/cpp_headers/nvmf_cmd.o 00:12:42.958 LINK hello_bdev 00:12:43.215 CXX test/cpp_headers/nvmf_fc_spec.o 00:12:43.215 CXX test/cpp_headers/nvmf_spec.o 00:12:43.473 CXX test/cpp_headers/nvmf_transport.o 00:12:43.473 CC test/env/pci/pci_ut.o 00:12:43.473 CXX test/cpp_headers/opal.o 00:12:43.741 LINK pci_ut 00:12:43.741 CXX test/cpp_headers/opal_spec.o 00:12:43.741 CXX test/cpp_headers/pci_ids.o 00:12:44.004 CXX test/cpp_headers/pipe.o 00:12:44.004 CC examples/blob/hello_world/hello_blob.o 00:12:44.004 CXX test/cpp_headers/queue.o 00:12:44.004 CXX test/cpp_headers/reduce.o 00:12:44.261 LINK hello_blob 00:12:44.261 CXX test/cpp_headers/rpc.o 00:12:44.519 CXX test/cpp_headers/scheduler.o 00:12:44.519 CXX test/cpp_headers/scsi.o 00:12:44.777 CXX test/cpp_headers/scsi_spec.o 00:12:45.035 CXX test/cpp_headers/sock.o 00:12:45.293 CXX test/cpp_headers/stdinc.o 00:12:45.293 CXX test/cpp_headers/string.o 00:12:45.550 CXX test/cpp_headers/thread.o 00:12:45.808 CXX test/cpp_headers/trace.o 00:12:45.808 CXX test/cpp_headers/trace_parser.o 00:12:46.068 CXX test/cpp_headers/tree.o 00:12:46.068 CXX test/cpp_headers/ublk.o 00:12:46.327 CXX test/cpp_headers/util.o 00:12:46.585 CC examples/ioat/perf/perf.o 00:12:46.585 CXX test/cpp_headers/uuid.o 00:12:46.585 LINK ioat_perf 00:12:46.843 CXX test/cpp_headers/version.o 00:12:46.843 CXX test/cpp_headers/vfio_user_pci.o 00:12:46.843 CXX test/cpp_headers/vfio_user_spec.o 00:12:47.162 CXX test/cpp_headers/vhost.o 00:12:47.421 CXX test/cpp_headers/vmd.o 00:12:47.421 CC examples/ioat/verify/verify.o 00:12:47.421 LINK verify 00:12:47.421 CXX test/cpp_headers/xor.o 00:12:47.678 CXX test/cpp_headers/zipf.o 00:12:47.678 CC examples/blob/cli/blobcli.o 00:12:47.936 LINK blobcli 00:12:48.194 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:12:48.760 LINK nvme_fuzz 00:12:56.878 CC examples/nvme/hello_world/hello_world.o 00:12:56.878 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:12:56.878 LINK hello_world 00:12:58.270 LINK iscsi_fuzz 00:12:58.270 CC test/app/histogram_perf/histogram_perf.o 00:12:58.270 LINK histogram_perf 00:12:58.527 CC examples/bdev/bdevperf/bdevperf.o 00:12:58.527 CC examples/nvme/reconnect/reconnect.o 00:12:58.784 LINK reconnect 00:12:59.042 CC test/app/jsoncat/jsoncat.o 00:12:59.042 LINK bdevperf 00:12:59.299 LINK jsoncat 00:12:59.558 CC 
examples/sock/hello_world/hello_sock.o 00:12:59.558 LINK hello_sock 00:12:59.815 gmake[2]: Nothing to be done for 'all'. 00:12:59.815 CC examples/vmd/lsvmd/lsvmd.o 00:13:00.073 LINK lsvmd 00:13:01.969 CC test/app/stub/stub.o 00:13:01.969 LINK stub 00:13:03.900 CC examples/vmd/led/led.o 00:13:03.900 LINK led 00:13:03.900 CC test/nvme/aer/aer.o 00:13:03.900 LINK aer 00:13:04.158 CC examples/nvme/nvme_manage/nvme_manage.o 00:13:04.416 CC test/rpc_client/rpc_client_test.o 00:13:04.416 LINK rpc_client_test 00:13:04.416 LINK nvme_manage 00:13:04.983 CC app/iscsi_tgt/iscsi_tgt.o 00:13:05.241 LINK iscsi_tgt 00:13:05.241 CC test/nvme/reset/reset.o 00:13:05.241 LINK reset 00:13:06.178 CC examples/nvmf/nvmf/nvmf.o 00:13:06.178 LINK nvmf 00:13:07.557 CC test/nvme/sgl/sgl.o 00:13:07.557 CC app/spdk_tgt/spdk_tgt.o 00:13:07.557 LINK spdk_tgt 00:13:07.815 LINK sgl 00:13:08.382 CC examples/nvme/arbitration/arbitration.o 00:13:08.382 LINK arbitration 00:13:08.639 CC app/spdk_lspci/spdk_lspci.o 00:13:08.639 LINK spdk_lspci 00:13:10.010 CC examples/nvme/hotplug/hotplug.o 00:13:10.010 LINK hotplug 00:13:10.944 CC examples/util/zipf/zipf.o 00:13:10.944 LINK zipf 00:13:10.944 CC app/spdk_nvme_perf/perf.o 00:13:11.202 CC test/nvme/e2edp/nvme_dp.o 00:13:11.460 LINK nvme_dp 00:13:11.738 LINK spdk_nvme_perf 00:13:13.115 CC examples/nvme/cmb_copy/cmb_copy.o 00:13:13.115 LINK cmb_copy 00:13:14.051 CC test/nvme/overhead/overhead.o 00:13:14.310 LINK overhead 00:13:15.246 CC test/thread/poller_perf/poller_perf.o 00:13:15.246 LINK poller_perf 00:13:15.246 CC app/spdk_nvme_identify/identify.o 00:13:16.643 LINK spdk_nvme_identify 00:13:16.904 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:13:17.163 LINK histogram_ut 00:13:17.422 CC test/unit/lib/accel/accel.c/accel_ut.o 00:13:18.359 CC examples/nvme/abort/abort.o 00:13:18.359 LINK abort 00:13:18.619 CC test/thread/lock/spdk_lock.o 00:13:18.878 CC test/nvme/err_injection/err_injection.o 00:13:18.878 LINK accel_ut 00:13:18.878 LINK err_injection 00:13:19.446 LINK spdk_lock 00:13:19.704 CC examples/thread/thread/thread_ex.o 00:13:19.705 LINK thread 00:13:21.171 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:13:21.739 CC app/spdk_nvme_discover/discovery_aer.o 00:13:21.739 LINK spdk_nvme_discover 00:13:22.676 CC test/unit/lib/bdev/part.c/part_ut.o 00:13:23.243 CC examples/idxd/perf/perf.o 00:13:23.502 LINK idxd_perf 00:13:24.070 CC test/nvme/startup/startup.o 00:13:24.070 LINK part_ut 00:13:24.070 LINK startup 00:13:24.637 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:13:24.896 LINK bdev_ut 00:13:24.896 LINK blob_bdev_ut 00:13:25.474 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:13:25.474 CC app/spdk_top/spdk_top.o 00:13:25.474 LINK pmr_persistence 00:13:25.474 CC test/unit/lib/blob/blob.c/blob_ut.o 00:13:26.412 LINK spdk_top 00:13:27.788 CC test/nvme/reserve/reserve.o 00:13:28.047 LINK reserve 00:13:28.047 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:13:28.305 LINK scsi_nvme_ut 00:13:28.564 CC test/nvme/simple_copy/simple_copy.o 00:13:28.823 LINK simple_copy 00:13:29.082 CC app/fio/nvme/fio_plugin.o 00:13:29.341 fio_plugin.c:1491:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:13:29.341 struct spdk_nvme_fdp_ruhs ruhs; 00:13:29.341 ^ 00:13:29.601 1 warning generated. 
00:13:29.601 LINK spdk_nvme 00:13:30.538 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:13:30.538 LINK gpt_ut 00:13:31.105 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:13:31.687 LINK blob_ut 00:13:31.947 CC app/fio/bdev/fio_plugin.o 00:13:32.205 LINK vbdev_lvol_ut 00:13:32.205 LINK spdk_bdev 00:13:32.463 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:13:32.463 CC test/nvme/connect_stress/connect_stress.o 00:13:32.463 LINK tree_ut 00:13:32.721 LINK connect_stress 00:13:32.721 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:13:33.287 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:13:33.853 LINK blobfs_async_ut 00:13:34.420 CC test/unit/lib/dma/dma.c/dma_ut.o 00:13:34.420 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:13:34.679 LINK dma_ut 00:13:34.938 CC test/nvme/boot_partition/boot_partition.o 00:13:34.938 LINK boot_partition 00:13:35.197 LINK blobfs_sync_ut 00:13:35.764 LINK bdev_ut 00:13:36.699 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:13:36.958 LINK blobfs_bdev_ut 00:13:36.958 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:13:36.958 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:13:37.217 LINK bdev_zone_ut 00:13:37.477 CC test/unit/lib/event/app.c/app_ut.o 00:13:37.735 LINK app_ut 00:13:37.993 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:13:38.252 CC test/nvme/compliance/nvme_compliance.o 00:13:38.252 LINK bdev_raid_ut 00:13:38.252 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:13:38.511 LINK nvme_compliance 00:13:38.511 LINK ioat_ut 00:13:38.512 LINK reactor_ut 00:13:38.512 CC test/nvme/fused_ordering/fused_ordering.o 00:13:38.771 LINK fused_ordering 00:13:39.376 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:13:39.655 CC test/nvme/doorbell_aers/doorbell_aers.o 00:13:39.655 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:13:39.655 LINK vbdev_zone_block_ut 00:13:39.655 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:13:39.655 LINK doorbell_aers 00:13:39.915 LINK bdev_raid_sb_ut 00:13:39.915 LINK concat_ut 00:13:40.175 CC test/nvme/fdp/fdp.o 00:13:40.175 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:13:40.175 LINK fdp 00:13:40.434 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:13:40.693 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:13:40.950 LINK raid1_ut 00:13:41.517 LINK conn_ut 00:13:41.776 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:13:41.776 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:13:41.776 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:13:42.035 LINK jsonrpc_server_ut 00:13:42.293 LINK init_grp_ut 00:13:42.552 LINK bdev_nvme_ut 00:13:42.810 LINK json_parse_ut 00:13:42.810 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:13:42.810 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:13:43.070 LINK json_util_ut 00:13:43.327 CC test/unit/lib/log/log.c/log_ut.o 00:13:43.327 LINK log_ut 00:13:43.327 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:13:43.646 CC test/unit/lib/iscsi/param.c/param_ut.o 00:13:43.646 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:13:43.646 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:13:43.931 LINK param_ut 00:13:43.931 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:13:44.189 LINK portal_grp_ut 00:13:44.189 LINK iscsi_ut 00:13:44.445 LINK json_write_ut 00:13:44.445 LINK tgt_node_ut 00:13:44.445 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:13:44.445 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:13:44.445 CC test/unit/lib/notify/notify.c/notify_ut.o 00:13:44.703 LINK notify_ut 00:13:44.703 CC 
test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:13:44.961 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:13:44.961 LINK lvol_ut 00:13:45.219 LINK nvme_ut 00:13:45.508 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:13:45.508 LINK nvme_ctrlr_cmd_ut 00:13:45.508 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:13:45.765 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:13:46.023 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:13:46.023 LINK nvme_ctrlr_ocssd_cmd_ut 00:13:46.023 CC test/unit/lib/sock/sock.c/sock_ut.o 00:13:46.281 LINK nvme_ctrlr_ut 00:13:46.281 LINK tcp_ut 00:13:46.281 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:13:46.281 LINK dev_ut 00:13:46.281 CC test/unit/lib/sock/posix.c/posix_ut.o 00:13:46.537 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:13:46.538 LINK scsi_ut 00:13:46.538 LINK ctrlr_ut 00:13:46.538 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:13:46.538 LINK lun_ut 00:13:46.794 CC test/unit/lib/thread/thread.c/thread_ut.o 00:13:46.794 LINK posix_ut 00:13:46.794 LINK subsystem_ut 00:13:47.053 LINK sock_ut 00:13:47.053 CC test/unit/lib/util/base64.c/base64_ut.o 00:13:47.053 LINK nvme_ns_ut 00:13:47.053 LINK base64_ut 00:13:47.312 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:13:47.312 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:13:47.312 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:13:47.571 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:13:47.571 LINK iobuf_ut 00:13:47.571 LINK bit_array_ut 00:13:47.571 LINK thread_ut 00:13:47.829 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:13:47.829 LINK scsi_bdev_ut 00:13:48.087 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:13:48.087 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:13:48.087 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:13:48.087 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:13:48.347 LINK crc16_ut 00:13:48.347 LINK scsi_pr_ut 00:13:48.347 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:13:48.347 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:13:48.347 LINK cpuset_ut 00:13:48.347 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:13:48.347 LINK pci_event_ut 00:13:48.605 LINK ctrlr_discovery_ut 00:13:48.605 LINK subsystem_ut 00:13:48.605 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:13:48.605 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:13:48.605 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:13:48.605 LINK ctrlr_bdev_ut 00:13:48.605 LINK crc32_ieee_ut 00:13:48.862 LINK nvme_ns_cmd_ut 00:13:48.862 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:13:48.862 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:13:48.862 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:13:48.862 LINK crc32c_ut 00:13:48.862 LINK crc64_ut 00:13:49.120 LINK nvmf_ut 00:13:49.120 LINK nvme_ns_ocssd_cmd_ut 00:13:49.120 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:13:49.120 CC test/unit/lib/util/dif.c/dif_ut.o 00:13:49.120 CC test/unit/lib/util/iov.c/iov_ut.o 00:13:49.120 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:13:49.380 LINK iov_ut 00:13:49.380 LINK rpc_ut 00:13:49.380 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:13:49.380 CC test/unit/lib/rdma/common.c/common_ut.o 00:13:49.380 LINK nvme_pcie_ut 00:13:49.640 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:13:49.640 CC test/unit/lib/util/math.c/math_ut.o 00:13:49.640 LINK idxd_user_ut 00:13:49.640 LINK math_ut 00:13:49.640 LINK common_ut 00:13:49.899 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:13:49.899 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:13:49.899 CC 
test/unit/lib/util/pipe.c/pipe_ut.o 00:13:49.899 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:13:50.158 CC test/unit/lib/util/string.c/string_ut.o 00:13:50.158 LINK transport_ut 00:13:50.158 LINK pipe_ut 00:13:50.158 LINK idxd_ut 00:13:50.158 LINK rdma_ut 00:13:50.158 LINK nvme_poll_group_ut 00:13:50.158 LINK nvme_quirks_ut 00:13:50.416 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:13:50.416 LINK string_ut 00:13:50.416 CC test/unit/lib/util/xor.c/xor_ut.o 00:13:50.416 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:13:50.675 LINK nvme_qpair_ut 00:13:50.675 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:13:50.675 LINK xor_ut 00:13:50.675 LINK dif_ut 00:13:50.675 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:13:50.934 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:13:50.934 LINK nvme_transport_ut 00:13:50.934 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:13:50.934 LINK nvme_io_msg_ut 00:13:51.194 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:13:51.194 LINK nvme_pcie_common_ut 00:13:51.455 LINK nvme_fabric_ut 00:13:51.455 LINK nvme_opal_ut 00:13:51.455 LINK nvme_tcp_ut 00:13:52.023 LINK nvme_rdma_ut 00:13:55.315 13:37:34 -- spdk/autopackage.sh@44 -- $ gmake -j10 clean 00:13:55.572 gmake[1]: Nothing to be done for 'clean'. 00:13:55.572 ps: stdin: not a terminal 00:13:59.762 gmake[2]: Nothing to be done for 'clean'. 00:13:59.762 13:37:38 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:13:59.762 13:37:38 -- common/autotest_common.sh@718 -- $ xtrace_disable 00:13:59.762 13:37:38 -- common/autotest_common.sh@10 -- $ set +x 00:13:59.762 13:37:38 -- spdk/autopackage.sh@48 -- $ timing_finish 00:13:59.762 13:37:38 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:13:59.762 13:37:38 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:13:59.762 + [[ -n 1261 ]] 00:13:59.762 + sudo kill 1261 00:13:59.771 [Pipeline] } 00:13:59.790 [Pipeline] // timeout 00:13:59.795 [Pipeline] } 00:13:59.813 [Pipeline] // stage 00:13:59.818 [Pipeline] } 00:13:59.835 [Pipeline] // catchError 00:13:59.844 [Pipeline] stage 00:13:59.847 [Pipeline] { (Stop VM) 00:13:59.861 [Pipeline] sh 00:14:00.144 + vagrant halt 00:14:02.681 ==> default: Halting domain... 00:14:20.783 [Pipeline] sh 00:14:21.089 + vagrant destroy -f 00:14:24.379 ==> default: Removing domain... 00:14:24.392 [Pipeline] sh 00:14:24.675 + mv output /var/jenkins/workspace/freebsd-vg-autotest/output 00:14:24.686 [Pipeline] } 00:14:24.706 [Pipeline] // stage 00:14:24.712 [Pipeline] } 00:14:24.732 [Pipeline] // dir 00:14:24.737 [Pipeline] } 00:14:24.755 [Pipeline] // wrap 00:14:24.761 [Pipeline] } 00:14:24.776 [Pipeline] // catchError 00:14:24.784 [Pipeline] stage 00:14:24.787 [Pipeline] { (Epilogue) 00:14:24.800 [Pipeline] sh 00:14:25.114 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:14:25.130 [Pipeline] catchError 00:14:25.132 [Pipeline] { 00:14:25.147 [Pipeline] sh 00:14:25.429 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:14:25.429 Artifacts sizes are good 00:14:25.441 [Pipeline] } 00:14:25.461 [Pipeline] // catchError 00:14:25.471 [Pipeline] archiveArtifacts 00:14:25.478 Archiving artifacts 00:14:25.530 [Pipeline] cleanWs 00:14:25.541 [WS-CLEANUP] Deleting project workspace... 00:14:25.541 [WS-CLEANUP] Deferred wipeout is used... 
00:14:25.548 [WS-CLEANUP] done 00:14:25.550 [Pipeline] } 00:14:25.569 [Pipeline] // stage 00:14:25.576 [Pipeline] } 00:14:25.593 [Pipeline] // node 00:14:25.599 [Pipeline] End of Pipeline 00:14:25.633 Finished: SUCCESS