00:00:00.000 Started by upstream project "autotest-per-patch" build number 132355 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.010 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.011 The recommended git tool is: git 00:00:00.011 using credential 00000000-0000-0000-0000-000000000002 00:00:00.013 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.030 Fetching changes from the remote Git repository 00:00:00.033 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.050 Using shallow fetch with depth 1 00:00:00.050 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.050 > git --version # timeout=10 00:00:00.070 > git --version # 'git version 2.39.2' 00:00:00.070 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.096 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.096 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.293 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.309 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.323 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.323 > git config core.sparsecheckout # timeout=10 00:00:02.335 > git read-tree -mu HEAD # timeout=10 00:00:02.354 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.383 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.383 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.591 [Pipeline] Start of Pipeline 00:00:02.607 [Pipeline] library 00:00:02.608 Loading library shm_lib@master 00:00:02.608 Library shm_lib@master is cached. Copying from home. 00:00:02.624 [Pipeline] node 00:00:17.627 Still waiting to schedule task 00:00:17.627 Waiting for next available executor on ‘vagrant-vm-host’ 00:03:36.399 Running on VM-host-WFP7 in /var/jenkins/workspace/ubuntu24-vg-autotest 00:03:36.401 [Pipeline] { 00:03:36.412 [Pipeline] catchError 00:03:36.414 [Pipeline] { 00:03:36.432 [Pipeline] wrap 00:03:36.442 [Pipeline] { 00:03:36.454 [Pipeline] stage 00:03:36.457 [Pipeline] { (Prologue) 00:03:36.479 [Pipeline] echo 00:03:36.480 Node: VM-host-WFP7 00:03:36.488 [Pipeline] cleanWs 00:03:36.498 [WS-CLEANUP] Deleting project workspace... 00:03:36.498 [WS-CLEANUP] Deferred wipeout is used... 
00:03:36.506 [WS-CLEANUP] done 00:03:36.708 [Pipeline] setCustomBuildProperty 00:03:36.790 [Pipeline] httpRequest 00:03:37.108 [Pipeline] echo 00:03:37.110 Sorcerer 10.211.164.20 is alive 00:03:37.120 [Pipeline] retry 00:03:37.123 [Pipeline] { 00:03:37.137 [Pipeline] httpRequest 00:03:37.142 HttpMethod: GET 00:03:37.143 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:03:37.144 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:03:37.144 Response Code: HTTP/1.1 200 OK 00:03:37.145 Success: Status code 200 is in the accepted range: 200,404 00:03:37.146 Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:03:37.292 [Pipeline] } 00:03:37.310 [Pipeline] // retry 00:03:37.318 [Pipeline] sh 00:03:37.601 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:03:37.617 [Pipeline] httpRequest 00:03:37.922 [Pipeline] echo 00:03:37.924 Sorcerer 10.211.164.20 is alive 00:03:37.935 [Pipeline] retry 00:03:37.937 [Pipeline] { 00:03:37.951 [Pipeline] httpRequest 00:03:37.960 HttpMethod: GET 00:03:37.961 URL: http://10.211.164.20/packages/spdk_4c583db5906f4f6b38fef624e5781fad0ec0bfea.tar.gz 00:03:37.970 Sending request to url: http://10.211.164.20/packages/spdk_4c583db5906f4f6b38fef624e5781fad0ec0bfea.tar.gz 00:03:37.974 Response Code: HTTP/1.1 200 OK 00:03:37.975 Success: Status code 200 is in the accepted range: 200,404 00:03:37.976 Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/spdk_4c583db5906f4f6b38fef624e5781fad0ec0bfea.tar.gz 00:03:40.243 [Pipeline] } 00:03:40.261 [Pipeline] // retry 00:03:40.269 [Pipeline] sh 00:03:40.554 + tar --no-same-owner -xf spdk_4c583db5906f4f6b38fef624e5781fad0ec0bfea.tar.gz 00:03:43.886 [Pipeline] sh 00:03:44.169 + git -C spdk log --oneline -n5 00:03:44.169 4c583db59 test/nvmf: Solve ambiguity around $NVMF_SECOND_TARGET_IP 00:03:44.169 c788bae60 test/nvmf: Don't pin nvmf_bdevperf and nvmf_target_disconnect to phy 00:03:44.169 e4689ab38 test/nvmf: Remove all transport conditions from the test suites 00:03:44.169 097b7c969 test/nvmf: Drop $RDMA_IP_LIST 00:03:44.169 400f484f7 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP 00:03:44.188 [Pipeline] writeFile 00:03:44.202 [Pipeline] sh 00:03:44.486 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:03:44.500 [Pipeline] sh 00:03:44.788 + cat autorun-spdk.conf 00:03:44.788 SPDK_TEST_UNITTEST=1 00:03:44.788 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:44.788 SPDK_TEST_NVME=1 00:03:44.788 SPDK_TEST_BLOCKDEV=1 00:03:44.788 SPDK_RUN_ASAN=1 00:03:44.788 SPDK_RUN_UBSAN=1 00:03:44.788 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:44.796 RUN_NIGHTLY=0 00:03:44.798 [Pipeline] } 00:03:44.812 [Pipeline] // stage 00:03:44.828 [Pipeline] stage 00:03:44.831 [Pipeline] { (Run VM) 00:03:44.844 [Pipeline] sh 00:03:45.129 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:03:45.129 + echo 'Start stage prepare_nvme.sh' 00:03:45.129 Start stage prepare_nvme.sh 00:03:45.129 + [[ -n 4 ]] 00:03:45.129 + disk_prefix=ex4 00:03:45.129 + [[ -n /var/jenkins/workspace/ubuntu24-vg-autotest ]] 00:03:45.129 + [[ -e /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf ]] 00:03:45.129 + source /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf 00:03:45.129 ++ SPDK_TEST_UNITTEST=1 00:03:45.129 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:45.129 ++ SPDK_TEST_NVME=1 00:03:45.129 ++ SPDK_TEST_BLOCKDEV=1 
00:03:45.129 ++ SPDK_RUN_ASAN=1 00:03:45.129 ++ SPDK_RUN_UBSAN=1 00:03:45.129 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:45.129 ++ RUN_NIGHTLY=0 00:03:45.129 + cd /var/jenkins/workspace/ubuntu24-vg-autotest 00:03:45.129 + nvme_files=() 00:03:45.129 + declare -A nvme_files 00:03:45.129 + backend_dir=/var/lib/libvirt/images/backends 00:03:45.129 + nvme_files['nvme.img']=5G 00:03:45.129 + nvme_files['nvme-cmb.img']=5G 00:03:45.129 + nvme_files['nvme-multi0.img']=4G 00:03:45.129 + nvme_files['nvme-multi1.img']=4G 00:03:45.129 + nvme_files['nvme-multi2.img']=4G 00:03:45.129 + nvme_files['nvme-openstack.img']=8G 00:03:45.129 + nvme_files['nvme-zns.img']=5G 00:03:45.129 + (( SPDK_TEST_NVME_PMR == 1 )) 00:03:45.129 + (( SPDK_TEST_FTL == 1 )) 00:03:45.129 + (( SPDK_TEST_NVME_FDP == 1 )) 00:03:45.129 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:03:45.129 + for nvme in "${!nvme_files[@]}" 00:03:45.129 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:03:45.129 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:03:45.129 + for nvme in "${!nvme_files[@]}" 00:03:45.129 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:03:45.129 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:03:45.129 + for nvme in "${!nvme_files[@]}" 00:03:45.129 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:03:45.129 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:03:45.129 + for nvme in "${!nvme_files[@]}" 00:03:45.129 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:03:45.129 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:03:45.129 + for nvme in "${!nvme_files[@]}" 00:03:45.129 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:03:45.129 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:03:45.129 + for nvme in "${!nvme_files[@]}" 00:03:45.129 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:03:45.129 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:03:45.129 + for nvme in "${!nvme_files[@]}" 00:03:45.129 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:03:45.389 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:03:45.389 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:03:45.389 + echo 'End stage prepare_nvme.sh' 00:03:45.389 End stage prepare_nvme.sh 00:03:45.402 [Pipeline] sh 00:03:45.685 + DISTRO=ubuntu2404 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:03:45.685 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex4-nvme.img -H -a -v -f ubuntu2404 00:03:45.686 00:03:45.686 DIR=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk/scripts/vagrant 00:03:45.686 
SPDK_DIR=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk 00:03:45.686 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu24-vg-autotest 00:03:45.686 HELP=0 00:03:45.686 DRY_RUN=0 00:03:45.686 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img, 00:03:45.686 NVME_DISKS_TYPE=nvme, 00:03:45.686 NVME_AUTO_CREATE=0 00:03:45.686 NVME_DISKS_NAMESPACES=, 00:03:45.686 NVME_CMB=, 00:03:45.686 NVME_PMR=, 00:03:45.686 NVME_ZNS=, 00:03:45.686 NVME_MS=, 00:03:45.686 NVME_FDP=, 00:03:45.686 SPDK_VAGRANT_DISTRO=ubuntu2404 00:03:45.686 SPDK_VAGRANT_VMCPU=10 00:03:45.686 SPDK_VAGRANT_VMRAM=12288 00:03:45.686 SPDK_VAGRANT_PROVIDER=libvirt 00:03:45.686 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:03:45.686 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:03:45.686 SPDK_OPENSTACK_NETWORK=0 00:03:45.686 VAGRANT_PACKAGE_BOX=0 00:03:45.686 VAGRANTFILE=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:03:45.686 FORCE_DISTRO=true 00:03:45.686 VAGRANT_BOX_VERSION= 00:03:45.686 EXTRA_VAGRANTFILES= 00:03:45.686 NIC_MODEL=virtio 00:03:45.686 00:03:45.686 mkdir: created directory '/var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt' 00:03:45.686 /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt /var/jenkins/workspace/ubuntu24-vg-autotest 00:03:48.237 Bringing machine 'default' up with 'libvirt' provider... 00:03:48.811 ==> default: Creating image (snapshot of base box volume). 00:03:48.811 ==> default: Creating domain with the following settings... 00:03:48.811 ==> default: -- Name: ubuntu2404-24.04-1720510786-2314_default_1732086832_71cf2d7157af3c1bdad5 00:03:48.811 ==> default: -- Domain type: kvm 00:03:48.811 ==> default: -- Cpus: 10 00:03:48.811 ==> default: -- Feature: acpi 00:03:48.811 ==> default: -- Feature: apic 00:03:48.811 ==> default: -- Feature: pae 00:03:48.811 ==> default: -- Memory: 12288M 00:03:48.811 ==> default: -- Memory Backing: hugepages: 00:03:48.811 ==> default: -- Management MAC: 00:03:48.811 ==> default: -- Loader: 00:03:48.811 ==> default: -- Nvram: 00:03:48.811 ==> default: -- Base box: spdk/ubuntu2404 00:03:48.811 ==> default: -- Storage pool: default 00:03:48.811 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2404-24.04-1720510786-2314_default_1732086832_71cf2d7157af3c1bdad5.img (20G) 00:03:48.811 ==> default: -- Volume Cache: default 00:03:48.811 ==> default: -- Kernel: 00:03:48.811 ==> default: -- Initrd: 00:03:48.811 ==> default: -- Graphics Type: vnc 00:03:48.811 ==> default: -- Graphics Port: -1 00:03:48.811 ==> default: -- Graphics IP: 127.0.0.1 00:03:48.811 ==> default: -- Graphics Password: Not defined 00:03:48.811 ==> default: -- Video Type: cirrus 00:03:48.811 ==> default: -- Video VRAM: 9216 00:03:48.811 ==> default: -- Sound Type: 00:03:48.811 ==> default: -- Keymap: en-us 00:03:48.811 ==> default: -- TPM Path: 00:03:48.811 ==> default: -- INPUT: type=mouse, bus=ps2 00:03:48.811 ==> default: -- Command line args: 00:03:48.811 ==> default: -> value=-device, 00:03:48.811 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:03:48.811 ==> default: -> value=-drive, 00:03:48.811 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:03:48.811 ==> default: -> value=-device, 00:03:48.811 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:49.070 ==> default: Creating shared folders metadata... 00:03:49.070 ==> default: Starting domain. 
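For reference, the NVMe wiring shown in the "Command line args" above corresponds to a QEMU invocation along the following lines. This is an illustrative sketch only: the -enable-kvm, -smp and -m flags are inferred from the domain settings logged above (kvm, 10 CPUs, 12288M), and the boot disk, network and display devices that libvirt adds are omitted.

    # Sketch: the controller/namespace arguments libvirt passes through to QEMU in this job.
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -enable-kvm -smp 10 -m 12288 \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096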
00:03:50.447 ==> default: Waiting for domain to get an IP address... 00:04:00.432 ==> default: Waiting for SSH to become available... 00:04:01.371 ==> default: Configuring and enabling network interfaces... 00:04:06.688 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:04:12.000 ==> default: Mounting SSHFS shared folder... 00:04:12.567 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output => /home/vagrant/spdk_repo/output 00:04:12.567 ==> default: Checking Mount.. 00:04:13.499 ==> default: Folder Successfully Mounted! 00:04:13.499 ==> default: Running provisioner: file... 00:04:13.757 default: ~/.gitconfig => .gitconfig 00:04:14.328 00:04:14.328 SUCCESS! 00:04:14.328 00:04:14.328 cd to /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt and type "vagrant ssh" to use. 00:04:14.328 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:04:14.328 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt" to destroy all trace of vm. 00:04:14.328 00:04:14.405 [Pipeline] } 00:04:14.420 [Pipeline] // stage 00:04:14.429 [Pipeline] dir 00:04:14.429 Running in /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt 00:04:14.431 [Pipeline] { 00:04:14.444 [Pipeline] catchError 00:04:14.446 [Pipeline] { 00:04:14.459 [Pipeline] sh 00:04:14.739 + vagrant ssh-config --host vagrant 00:04:14.740 + sed -ne /^Host/,$p 00:04:14.740 + tee ssh_conf 00:04:18.053 Host vagrant 00:04:18.053 HostName 192.168.121.53 00:04:18.053 User vagrant 00:04:18.053 Port 22 00:04:18.053 UserKnownHostsFile /dev/null 00:04:18.053 StrictHostKeyChecking no 00:04:18.053 PasswordAuthentication no 00:04:18.053 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2404/24.04-1720510786-2314/libvirt/ubuntu2404 00:04:18.053 IdentitiesOnly yes 00:04:18.053 LogLevel FATAL 00:04:18.053 ForwardAgent yes 00:04:18.053 ForwardX11 yes 00:04:18.053 00:04:18.067 [Pipeline] withEnv 00:04:18.069 [Pipeline] { 00:04:18.083 [Pipeline] sh 00:04:18.365 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:04:18.365 source /etc/os-release 00:04:18.365 [[ -e /image.version ]] && img=$(< /image.version) 00:04:18.365 # Minimal, systemd-like check. 00:04:18.365 if [[ -e /.dockerenv ]]; then 00:04:18.365 # Clear garbage from the node's name: 00:04:18.365 # agt-er_autotest_547-896 -> autotest_547-896 00:04:18.365 # $HOSTNAME is the actual container id 00:04:18.365 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:04:18.365 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:04:18.365 # We can assume this is a mount from a host where container is running, 00:04:18.365 # so fetch its hostname to easily identify the target swarm worker. 
00:04:18.365 container="$(< /etc/hostname) ($agent)" 00:04:18.365 else 00:04:18.365 # Fallback 00:04:18.365 container=$agent 00:04:18.365 fi 00:04:18.365 fi 00:04:18.365 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:04:18.365 00:04:18.636 [Pipeline] } 00:04:18.652 [Pipeline] // withEnv 00:04:18.661 [Pipeline] setCustomBuildProperty 00:04:18.676 [Pipeline] stage 00:04:18.679 [Pipeline] { (Tests) 00:04:18.696 [Pipeline] sh 00:04:18.980 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:19.254 [Pipeline] sh 00:04:19.537 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:04:19.814 [Pipeline] timeout 00:04:19.814 Timeout set to expire in 1 hr 30 min 00:04:19.816 [Pipeline] { 00:04:19.831 [Pipeline] sh 00:04:20.120 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:20.689 HEAD is now at 4c583db59 test/nvmf: Solve ambiguity around $NVMF_SECOND_TARGET_IP 00:04:20.704 [Pipeline] sh 00:04:20.988 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:21.264 [Pipeline] sh 00:04:21.552 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:21.829 [Pipeline] sh 00:04:22.110 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu24-vg-autotest ./autoruner.sh spdk_repo 00:04:22.369 ++ readlink -f spdk_repo 00:04:22.369 + DIR_ROOT=/home/vagrant/spdk_repo 00:04:22.369 + [[ -n /home/vagrant/spdk_repo ]] 00:04:22.369 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:04:22.369 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:04:22.369 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:04:22.369 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:04:22.369 + [[ -d /home/vagrant/spdk_repo/output ]] 00:04:22.369 + [[ ubuntu24-vg-autotest == pkgdep-* ]] 00:04:22.369 + cd /home/vagrant/spdk_repo 00:04:22.369 + source /etc/os-release 00:04:22.369 ++ PRETTY_NAME='Ubuntu 24.04 LTS' 00:04:22.369 ++ NAME=Ubuntu 00:04:22.369 ++ VERSION_ID=24.04 00:04:22.369 ++ VERSION='24.04 LTS (Noble Numbat)' 00:04:22.369 ++ VERSION_CODENAME=noble 00:04:22.369 ++ ID=ubuntu 00:04:22.369 ++ ID_LIKE=debian 00:04:22.369 ++ HOME_URL=https://www.ubuntu.com/ 00:04:22.369 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:04:22.369 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:04:22.369 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:04:22.369 ++ UBUNTU_CODENAME=noble 00:04:22.369 ++ LOGO=ubuntu-logo 00:04:22.369 + uname -a 00:04:22.369 Linux ubuntu2404-cloud-1720510786-2314 6.8.0-36-generic #36-Ubuntu SMP PREEMPT_DYNAMIC Mon Jun 10 10:49:14 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:04:22.369 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:22.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:22.936 Hugepages 00:04:22.936 node hugesize free / total 00:04:22.937 node0 1048576kB 0 / 0 00:04:22.937 node0 2048kB 0 / 0 00:04:22.937 00:04:22.937 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:22.937 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:22.937 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:22.937 + rm -f /tmp/spdk-ld-path 00:04:22.937 + source autorun-spdk.conf 00:04:22.937 ++ SPDK_TEST_UNITTEST=1 00:04:22.937 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:22.937 ++ SPDK_TEST_NVME=1 00:04:22.937 ++ SPDK_TEST_BLOCKDEV=1 00:04:22.937 ++ SPDK_RUN_ASAN=1 00:04:22.937 ++ SPDK_RUN_UBSAN=1 00:04:22.937 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:22.937 ++ RUN_NIGHTLY=0 00:04:22.937 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:22.937 + [[ -n '' ]] 00:04:22.937 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:04:22.937 + for M in /var/spdk/build-*-manifest.txt 00:04:22.937 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:22.937 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:22.937 + for M in /var/spdk/build-*-manifest.txt 00:04:22.937 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:22.937 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:22.937 ++ uname 00:04:22.937 + [[ Linux == \L\i\n\u\x ]] 00:04:22.937 + sudo dmesg -T 00:04:22.937 + sudo dmesg --clear 00:04:22.937 + dmesg_pid=2402 00:04:22.937 + sudo dmesg -Tw 00:04:22.937 + [[ Ubuntu == FreeBSD ]] 00:04:22.937 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:22.937 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:22.937 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:22.937 + [[ -x /usr/src/fio-static/fio ]] 00:04:22.937 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:22.937 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:04:22.937 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:22.937 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:04:22.937 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:22.937 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:22.937 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:22.937 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:23.197 07:14:25 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:23.197 07:14:25 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:23.197 07:14:25 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_TEST_UNITTEST=1 00:04:23.197 07:14:25 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:23.197 07:14:25 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME=1 00:04:23.197 07:14:25 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_BLOCKDEV=1 00:04:23.197 07:14:25 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:04:23.197 07:14:25 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:04:23.197 07:14:25 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:23.197 07:14:25 -- spdk_repo/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=0 00:04:23.197 07:14:25 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:04:23.197 07:14:25 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:23.197 07:14:25 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:23.197 07:14:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:23.197 07:14:25 -- scripts/common.sh@15 -- $ shopt -s extglob 00:04:23.197 07:14:25 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:23.197 07:14:25 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:23.197 07:14:25 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:23.197 07:14:25 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:23.197 07:14:25 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:23.197 07:14:25 -- paths/export.sh@4 -- $ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:23.197 07:14:25 -- paths/export.sh@5 -- $ 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:23.197 07:14:25 -- paths/export.sh@6 -- $ export PATH 00:04:23.197 07:14:25 -- paths/export.sh@7 -- $ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:23.197 07:14:25 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:23.197 07:14:25 -- common/autobuild_common.sh@493 -- $ date +%s 00:04:23.197 07:14:25 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732086865.XXXXXX 00:04:23.197 07:14:25 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732086865.zDLgxd 00:04:23.197 07:14:25 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:04:23.197 07:14:25 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:04:23.197 07:14:25 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:04:23.197 07:14:25 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:23.197 07:14:25 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:23.197 07:14:25 -- common/autobuild_common.sh@509 -- $ get_config_params 00:04:23.197 07:14:25 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:04:23.197 07:14:25 -- common/autotest_common.sh@10 -- $ set +x 00:04:23.197 07:14:25 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:04:23.197 07:14:25 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:04:23.197 07:14:25 -- pm/common@17 -- $ local monitor 00:04:23.197 07:14:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.197 07:14:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.197 07:14:25 -- pm/common@21 -- $ date +%s 00:04:23.197 07:14:25 -- pm/common@25 -- $ sleep 1 00:04:23.197 07:14:25 -- pm/common@21 -- $ date +%s 00:04:23.197 07:14:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732086865 00:04:23.197 07:14:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732086865 00:04:23.197 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732086865_collect-cpu-load.pm.log 00:04:23.197 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732086865_collect-vmstat.pm.log 00:04:24.135 07:14:26 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:04:24.135 07:14:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:24.135 07:14:26 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:24.135 07:14:26 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:24.135 07:14:26 -- spdk/autobuild.sh@16 -- $ date -u 00:04:24.135 Wed Nov 20 07:14:26 UTC 2024 00:04:24.135 07:14:26 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:24.394 v25.01-pre-206-g4c583db59 00:04:24.394 07:14:26 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:04:24.394 07:14:26 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:04:24.394 07:14:26 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:24.394 07:14:26 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:24.394 07:14:26 -- common/autotest_common.sh@10 -- $ set +x 00:04:24.394 ************************************ 00:04:24.394 START TEST asan 00:04:24.394 ************************************ 00:04:24.394 using asan 00:04:24.394 07:14:27 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:04:24.394 00:04:24.394 real 0m0.000s 00:04:24.394 user 0m0.000s 00:04:24.394 sys 0m0.000s 00:04:24.394 07:14:27 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:24.394 07:14:27 asan -- common/autotest_common.sh@10 -- $ set +x 00:04:24.394 ************************************ 00:04:24.394 END TEST asan 00:04:24.394 ************************************ 00:04:24.394 07:14:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:24.394 07:14:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:24.394 07:14:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:24.394 07:14:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:24.394 07:14:27 -- common/autotest_common.sh@10 -- $ set +x 00:04:24.394 ************************************ 00:04:24.394 START TEST ubsan 00:04:24.394 ************************************ 00:04:24.394 using ubsan 00:04:24.394 07:14:27 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:04:24.394 00:04:24.394 real 0m0.001s 00:04:24.394 user 0m0.001s 00:04:24.394 sys 0m0.000s 00:04:24.394 07:14:27 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:24.394 07:14:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:24.394 ************************************ 00:04:24.394 END TEST ubsan 00:04:24.394 ************************************ 00:04:24.394 07:14:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:24.394 07:14:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:24.394 07:14:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:24.395 07:14:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:24.395 07:14:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:24.395 07:14:27 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:04:24.395 07:14:27 -- spdk/autobuild.sh@58 -- $ unittest_build 00:04:24.395 07:14:27 -- common/autobuild_common.sh@433 -- $ run_test unittest_build _unittest_build 00:04:24.395 07:14:27 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:04:24.395 07:14:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:24.395 07:14:27 -- common/autotest_common.sh@10 -- $ set +x 00:04:24.395 ************************************ 
00:04:24.395 START TEST unittest_build 00:04:24.395 ************************************ 00:04:24.395 07:14:27 unittest_build -- common/autotest_common.sh@1129 -- $ _unittest_build 00:04:24.395 07:14:27 unittest_build -- common/autobuild_common.sh@424 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --without-shared 00:04:24.653 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:24.653 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:25.265 Using 'verbs' RDMA provider 00:04:44.285 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:59.198 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:59.198 Creating mk/config.mk...done. 00:04:59.198 Creating mk/cc.flags.mk...done. 00:04:59.198 Type 'make' to build. 00:04:59.198 07:15:02 unittest_build -- common/autobuild_common.sh@425 -- $ make -j10 00:04:59.198 make[1]: Nothing to be done for 'all'. 00:05:09.233 The Meson build system 00:05:09.233 Version: 1.4.1 00:05:09.233 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:05:09.233 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:09.233 Build type: native build 00:05:09.233 Program cat found: YES (/usr/bin/cat) 00:05:09.233 Project name: DPDK 00:05:09.233 Project version: 24.03.0 00:05:09.233 C compiler for the host machine: cc (gcc 13.2.0 "cc (Ubuntu 13.2.0-23ubuntu4) 13.2.0") 00:05:09.233 C linker for the host machine: cc ld.bfd 2.42 00:05:09.233 Host machine cpu family: x86_64 00:05:09.233 Host machine cpu: x86_64 00:05:09.233 Message: ## Building in Developer Mode ## 00:05:09.233 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:09.233 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:05:09.233 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:09.233 Program python3 found: YES (/var/spdk/dependencies/pip/bin/python3) 00:05:09.233 Program cat found: YES (/usr/bin/cat) 00:05:09.233 Compiler for C supports arguments -march=native: YES 00:05:09.233 Checking for size of "void *" : 8 00:05:09.233 Checking for size of "void *" : 8 (cached) 00:05:09.233 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:05:09.233 Library m found: YES 00:05:09.233 Library numa found: YES 00:05:09.233 Has header "numaif.h" : YES 00:05:09.233 Library fdt found: NO 00:05:09.233 Library execinfo found: NO 00:05:09.233 Has header "execinfo.h" : YES 00:05:09.233 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.1 00:05:09.233 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:09.233 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:09.233 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:09.233 Run-time dependency openssl found: YES 3.0.13 00:05:09.233 Run-time dependency libpcap found: NO (tried pkgconfig) 00:05:09.233 Library pcap found: NO 00:05:09.233 Compiler for C supports arguments -Wcast-qual: YES 00:05:09.233 Compiler for C supports arguments -Wdeprecated: YES 00:05:09.233 Compiler for C supports arguments -Wformat: YES 00:05:09.233 Compiler for C supports arguments -Wformat-nonliteral: YES 00:05:09.233 Compiler for C supports arguments -Wformat-security: YES 00:05:09.233 Compiler for C supports arguments -Wmissing-declarations: YES 
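For reference, the SPDK configure and build steps traced above (just before this DPDK Meson output) condense to the sketch below. It only restates the invocation already shown in the log, with the paths used by this job; it is not an additional step.

    # Sketch: hand-run equivalent of the unittest_build configure/make trace above.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan \
        --enable-coverage --with-ublk --without-shared
    make -j10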
00:05:09.233 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:09.233 Compiler for C supports arguments -Wnested-externs: YES 00:05:09.234 Compiler for C supports arguments -Wold-style-definition: YES 00:05:09.234 Compiler for C supports arguments -Wpointer-arith: YES 00:05:09.234 Compiler for C supports arguments -Wsign-compare: YES 00:05:09.234 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:09.234 Compiler for C supports arguments -Wundef: YES 00:05:09.234 Compiler for C supports arguments -Wwrite-strings: YES 00:05:09.234 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:09.234 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:09.234 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:09.234 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:09.234 Program objdump found: YES (/usr/bin/objdump) 00:05:09.234 Compiler for C supports arguments -mavx512f: YES 00:05:09.234 Checking if "AVX512 checking" compiles: YES 00:05:09.234 Fetching value of define "__SSE4_2__" : 1 00:05:09.234 Fetching value of define "__AES__" : 1 00:05:09.234 Fetching value of define "__AVX__" : 1 00:05:09.234 Fetching value of define "__AVX2__" : 1 00:05:09.234 Fetching value of define "__AVX512BW__" : 1 00:05:09.234 Fetching value of define "__AVX512CD__" : 1 00:05:09.234 Fetching value of define "__AVX512DQ__" : 1 00:05:09.234 Fetching value of define "__AVX512F__" : 1 00:05:09.234 Fetching value of define "__AVX512VL__" : 1 00:05:09.234 Fetching value of define "__PCLMUL__" : 1 00:05:09.234 Fetching value of define "__RDRND__" : 1 00:05:09.234 Fetching value of define "__RDSEED__" : 1 00:05:09.234 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:09.234 Fetching value of define "__znver1__" : (undefined) 00:05:09.234 Fetching value of define "__znver2__" : (undefined) 00:05:09.234 Fetching value of define "__znver3__" : (undefined) 00:05:09.234 Fetching value of define "__znver4__" : (undefined) 00:05:09.234 Library asan found: YES 00:05:09.234 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:09.234 Message: lib/log: Defining dependency "log" 00:05:09.234 Message: lib/kvargs: Defining dependency "kvargs" 00:05:09.234 Message: lib/telemetry: Defining dependency "telemetry" 00:05:09.234 Library rt found: YES 00:05:09.234 Checking for function "getentropy" : NO 00:05:09.234 Message: lib/eal: Defining dependency "eal" 00:05:09.234 Message: lib/ring: Defining dependency "ring" 00:05:09.234 Message: lib/rcu: Defining dependency "rcu" 00:05:09.234 Message: lib/mempool: Defining dependency "mempool" 00:05:09.234 Message: lib/mbuf: Defining dependency "mbuf" 00:05:09.234 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:09.234 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:09.234 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:09.234 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:09.234 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:09.234 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:05:09.234 Compiler for C supports arguments -mpclmul: YES 00:05:09.234 Compiler for C supports arguments -maes: YES 00:05:09.234 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:09.234 Compiler for C supports arguments -mavx512bw: YES 00:05:09.234 Compiler for C supports arguments -mavx512dq: YES 00:05:09.234 Compiler for C supports arguments -mavx512vl: YES 00:05:09.234 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:05:09.234 Compiler for C supports arguments -mavx2: YES 00:05:09.234 Compiler for C supports arguments -mavx: YES 00:05:09.234 Message: lib/net: Defining dependency "net" 00:05:09.234 Message: lib/meter: Defining dependency "meter" 00:05:09.234 Message: lib/ethdev: Defining dependency "ethdev" 00:05:09.234 Message: lib/pci: Defining dependency "pci" 00:05:09.234 Message: lib/cmdline: Defining dependency "cmdline" 00:05:09.234 Message: lib/hash: Defining dependency "hash" 00:05:09.234 Message: lib/timer: Defining dependency "timer" 00:05:09.234 Message: lib/compressdev: Defining dependency "compressdev" 00:05:09.234 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:09.234 Message: lib/dmadev: Defining dependency "dmadev" 00:05:09.234 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:09.234 Message: lib/power: Defining dependency "power" 00:05:09.234 Message: lib/reorder: Defining dependency "reorder" 00:05:09.234 Message: lib/security: Defining dependency "security" 00:05:09.234 Has header "linux/userfaultfd.h" : YES 00:05:09.234 Has header "linux/vduse.h" : YES 00:05:09.234 Message: lib/vhost: Defining dependency "vhost" 00:05:09.234 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:09.234 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:09.234 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:09.234 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:09.234 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:09.234 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:09.234 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:09.234 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:09.234 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:09.234 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:09.234 Program doxygen found: YES (/usr/bin/doxygen) 00:05:09.234 Configuring doxy-api-html.conf using configuration 00:05:09.234 Configuring doxy-api-man.conf using configuration 00:05:09.234 Program mandb found: YES (/usr/bin/mandb) 00:05:09.234 Program sphinx-build found: NO 00:05:09.234 Configuring rte_build_config.h using configuration 00:05:09.234 Message: 00:05:09.234 ================= 00:05:09.234 Applications Enabled 00:05:09.234 ================= 00:05:09.234 00:05:09.234 apps: 00:05:09.234 00:05:09.234 00:05:09.234 Message: 00:05:09.234 ================= 00:05:09.234 Libraries Enabled 00:05:09.234 ================= 00:05:09.234 00:05:09.234 libs: 00:05:09.234 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:09.234 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:09.234 cryptodev, dmadev, power, reorder, security, vhost, 00:05:09.234 00:05:09.234 Message: 00:05:09.234 =============== 00:05:09.234 Drivers Enabled 00:05:09.234 =============== 00:05:09.234 00:05:09.234 common: 00:05:09.234 00:05:09.234 bus: 00:05:09.234 pci, vdev, 00:05:09.234 mempool: 00:05:09.234 ring, 00:05:09.234 dma: 00:05:09.234 00:05:09.234 net: 00:05:09.234 00:05:09.234 crypto: 00:05:09.234 00:05:09.234 compress: 00:05:09.234 00:05:09.234 vdpa: 00:05:09.234 00:05:09.234 00:05:09.234 Message: 00:05:09.234 ================= 00:05:09.234 Content Skipped 00:05:09.234 ================= 00:05:09.234 00:05:09.234 apps: 00:05:09.234 dumpcap: explicitly disabled via build config 00:05:09.234 graph: explicitly disabled via 
build config 00:05:09.234 pdump: explicitly disabled via build config 00:05:09.234 proc-info: explicitly disabled via build config 00:05:09.234 test-acl: explicitly disabled via build config 00:05:09.234 test-bbdev: explicitly disabled via build config 00:05:09.234 test-cmdline: explicitly disabled via build config 00:05:09.234 test-compress-perf: explicitly disabled via build config 00:05:09.234 test-crypto-perf: explicitly disabled via build config 00:05:09.234 test-dma-perf: explicitly disabled via build config 00:05:09.234 test-eventdev: explicitly disabled via build config 00:05:09.234 test-fib: explicitly disabled via build config 00:05:09.234 test-flow-perf: explicitly disabled via build config 00:05:09.234 test-gpudev: explicitly disabled via build config 00:05:09.234 test-mldev: explicitly disabled via build config 00:05:09.234 test-pipeline: explicitly disabled via build config 00:05:09.234 test-pmd: explicitly disabled via build config 00:05:09.234 test-regex: explicitly disabled via build config 00:05:09.234 test-sad: explicitly disabled via build config 00:05:09.234 test-security-perf: explicitly disabled via build config 00:05:09.234 00:05:09.234 libs: 00:05:09.234 argparse: explicitly disabled via build config 00:05:09.234 metrics: explicitly disabled via build config 00:05:09.234 acl: explicitly disabled via build config 00:05:09.234 bbdev: explicitly disabled via build config 00:05:09.234 bitratestats: explicitly disabled via build config 00:05:09.234 bpf: explicitly disabled via build config 00:05:09.234 cfgfile: explicitly disabled via build config 00:05:09.234 distributor: explicitly disabled via build config 00:05:09.234 efd: explicitly disabled via build config 00:05:09.234 eventdev: explicitly disabled via build config 00:05:09.234 dispatcher: explicitly disabled via build config 00:05:09.234 gpudev: explicitly disabled via build config 00:05:09.234 gro: explicitly disabled via build config 00:05:09.234 gso: explicitly disabled via build config 00:05:09.234 ip_frag: explicitly disabled via build config 00:05:09.234 jobstats: explicitly disabled via build config 00:05:09.234 latencystats: explicitly disabled via build config 00:05:09.234 lpm: explicitly disabled via build config 00:05:09.234 member: explicitly disabled via build config 00:05:09.234 pcapng: explicitly disabled via build config 00:05:09.234 rawdev: explicitly disabled via build config 00:05:09.234 regexdev: explicitly disabled via build config 00:05:09.234 mldev: explicitly disabled via build config 00:05:09.234 rib: explicitly disabled via build config 00:05:09.234 sched: explicitly disabled via build config 00:05:09.234 stack: explicitly disabled via build config 00:05:09.234 ipsec: explicitly disabled via build config 00:05:09.234 pdcp: explicitly disabled via build config 00:05:09.234 fib: explicitly disabled via build config 00:05:09.234 port: explicitly disabled via build config 00:05:09.234 pdump: explicitly disabled via build config 00:05:09.234 table: explicitly disabled via build config 00:05:09.234 pipeline: explicitly disabled via build config 00:05:09.234 graph: explicitly disabled via build config 00:05:09.234 node: explicitly disabled via build config 00:05:09.234 00:05:09.234 drivers: 00:05:09.234 common/cpt: not in enabled drivers build config 00:05:09.234 common/dpaax: not in enabled drivers build config 00:05:09.234 common/iavf: not in enabled drivers build config 00:05:09.234 common/idpf: not in enabled drivers build config 00:05:09.234 common/ionic: not in enabled drivers build 
config 00:05:09.234 common/mvep: not in enabled drivers build config 00:05:09.235 common/octeontx: not in enabled drivers build config 00:05:09.235 bus/auxiliary: not in enabled drivers build config 00:05:09.235 bus/cdx: not in enabled drivers build config 00:05:09.235 bus/dpaa: not in enabled drivers build config 00:05:09.235 bus/fslmc: not in enabled drivers build config 00:05:09.235 bus/ifpga: not in enabled drivers build config 00:05:09.235 bus/platform: not in enabled drivers build config 00:05:09.235 bus/uacce: not in enabled drivers build config 00:05:09.235 bus/vmbus: not in enabled drivers build config 00:05:09.235 common/cnxk: not in enabled drivers build config 00:05:09.235 common/mlx5: not in enabled drivers build config 00:05:09.235 common/nfp: not in enabled drivers build config 00:05:09.235 common/nitrox: not in enabled drivers build config 00:05:09.235 common/qat: not in enabled drivers build config 00:05:09.235 common/sfc_efx: not in enabled drivers build config 00:05:09.235 mempool/bucket: not in enabled drivers build config 00:05:09.235 mempool/cnxk: not in enabled drivers build config 00:05:09.235 mempool/dpaa: not in enabled drivers build config 00:05:09.235 mempool/dpaa2: not in enabled drivers build config 00:05:09.235 mempool/octeontx: not in enabled drivers build config 00:05:09.235 mempool/stack: not in enabled drivers build config 00:05:09.235 dma/cnxk: not in enabled drivers build config 00:05:09.235 dma/dpaa: not in enabled drivers build config 00:05:09.235 dma/dpaa2: not in enabled drivers build config 00:05:09.235 dma/hisilicon: not in enabled drivers build config 00:05:09.235 dma/idxd: not in enabled drivers build config 00:05:09.235 dma/ioat: not in enabled drivers build config 00:05:09.235 dma/skeleton: not in enabled drivers build config 00:05:09.235 net/af_packet: not in enabled drivers build config 00:05:09.235 net/af_xdp: not in enabled drivers build config 00:05:09.235 net/ark: not in enabled drivers build config 00:05:09.235 net/atlantic: not in enabled drivers build config 00:05:09.235 net/avp: not in enabled drivers build config 00:05:09.235 net/axgbe: not in enabled drivers build config 00:05:09.235 net/bnx2x: not in enabled drivers build config 00:05:09.235 net/bnxt: not in enabled drivers build config 00:05:09.235 net/bonding: not in enabled drivers build config 00:05:09.235 net/cnxk: not in enabled drivers build config 00:05:09.235 net/cpfl: not in enabled drivers build config 00:05:09.235 net/cxgbe: not in enabled drivers build config 00:05:09.235 net/dpaa: not in enabled drivers build config 00:05:09.235 net/dpaa2: not in enabled drivers build config 00:05:09.235 net/e1000: not in enabled drivers build config 00:05:09.235 net/ena: not in enabled drivers build config 00:05:09.235 net/enetc: not in enabled drivers build config 00:05:09.235 net/enetfec: not in enabled drivers build config 00:05:09.235 net/enic: not in enabled drivers build config 00:05:09.235 net/failsafe: not in enabled drivers build config 00:05:09.235 net/fm10k: not in enabled drivers build config 00:05:09.235 net/gve: not in enabled drivers build config 00:05:09.235 net/hinic: not in enabled drivers build config 00:05:09.235 net/hns3: not in enabled drivers build config 00:05:09.235 net/i40e: not in enabled drivers build config 00:05:09.235 net/iavf: not in enabled drivers build config 00:05:09.235 net/ice: not in enabled drivers build config 00:05:09.235 net/idpf: not in enabled drivers build config 00:05:09.235 net/igc: not in enabled drivers build config 00:05:09.235 
net/ionic: not in enabled drivers build config 00:05:09.235 net/ipn3ke: not in enabled drivers build config 00:05:09.235 net/ixgbe: not in enabled drivers build config 00:05:09.235 net/mana: not in enabled drivers build config 00:05:09.235 net/memif: not in enabled drivers build config 00:05:09.235 net/mlx4: not in enabled drivers build config 00:05:09.235 net/mlx5: not in enabled drivers build config 00:05:09.235 net/mvneta: not in enabled drivers build config 00:05:09.235 net/mvpp2: not in enabled drivers build config 00:05:09.235 net/netvsc: not in enabled drivers build config 00:05:09.235 net/nfb: not in enabled drivers build config 00:05:09.235 net/nfp: not in enabled drivers build config 00:05:09.235 net/ngbe: not in enabled drivers build config 00:05:09.235 net/null: not in enabled drivers build config 00:05:09.235 net/octeontx: not in enabled drivers build config 00:05:09.235 net/octeon_ep: not in enabled drivers build config 00:05:09.235 net/pcap: not in enabled drivers build config 00:05:09.235 net/pfe: not in enabled drivers build config 00:05:09.235 net/qede: not in enabled drivers build config 00:05:09.235 net/ring: not in enabled drivers build config 00:05:09.235 net/sfc: not in enabled drivers build config 00:05:09.235 net/softnic: not in enabled drivers build config 00:05:09.235 net/tap: not in enabled drivers build config 00:05:09.235 net/thunderx: not in enabled drivers build config 00:05:09.235 net/txgbe: not in enabled drivers build config 00:05:09.235 net/vdev_netvsc: not in enabled drivers build config 00:05:09.235 net/vhost: not in enabled drivers build config 00:05:09.235 net/virtio: not in enabled drivers build config 00:05:09.235 net/vmxnet3: not in enabled drivers build config 00:05:09.235 raw/*: missing internal dependency, "rawdev" 00:05:09.235 crypto/armv8: not in enabled drivers build config 00:05:09.235 crypto/bcmfs: not in enabled drivers build config 00:05:09.235 crypto/caam_jr: not in enabled drivers build config 00:05:09.235 crypto/ccp: not in enabled drivers build config 00:05:09.235 crypto/cnxk: not in enabled drivers build config 00:05:09.235 crypto/dpaa_sec: not in enabled drivers build config 00:05:09.235 crypto/dpaa2_sec: not in enabled drivers build config 00:05:09.235 crypto/ipsec_mb: not in enabled drivers build config 00:05:09.235 crypto/mlx5: not in enabled drivers build config 00:05:09.235 crypto/mvsam: not in enabled drivers build config 00:05:09.235 crypto/nitrox: not in enabled drivers build config 00:05:09.235 crypto/null: not in enabled drivers build config 00:05:09.235 crypto/octeontx: not in enabled drivers build config 00:05:09.235 crypto/openssl: not in enabled drivers build config 00:05:09.235 crypto/scheduler: not in enabled drivers build config 00:05:09.235 crypto/uadk: not in enabled drivers build config 00:05:09.235 crypto/virtio: not in enabled drivers build config 00:05:09.235 compress/isal: not in enabled drivers build config 00:05:09.235 compress/mlx5: not in enabled drivers build config 00:05:09.235 compress/nitrox: not in enabled drivers build config 00:05:09.235 compress/octeontx: not in enabled drivers build config 00:05:09.235 compress/zlib: not in enabled drivers build config 00:05:09.235 regex/*: missing internal dependency, "regexdev" 00:05:09.235 ml/*: missing internal dependency, "mldev" 00:05:09.235 vdpa/ifc: not in enabled drivers build config 00:05:09.235 vdpa/mlx5: not in enabled drivers build config 00:05:09.235 vdpa/nfp: not in enabled drivers build config 00:05:09.235 vdpa/sfc: not in enabled drivers build 
config 00:05:09.235 event/*: missing internal dependency, "eventdev" 00:05:09.235 baseband/*: missing internal dependency, "bbdev" 00:05:09.235 gpu/*: missing internal dependency, "gpudev" 00:05:09.235 00:05:09.235 00:05:09.235 Build targets in project: 85 00:05:09.235 00:05:09.235 DPDK 24.03.0 00:05:09.235 00:05:09.235 User defined options 00:05:09.235 buildtype : debug 00:05:09.235 default_library : static 00:05:09.235 libdir : lib 00:05:09.235 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:09.235 b_sanitize : address 00:05:09.235 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:09.235 c_link_args : 00:05:09.235 cpu_instruction_set: native 00:05:09.235 disable_apps : test-pipeline,test-pmd,test-eventdev,test,test-cmdline,test-bbdev,test-sad,proc-info,graph,test-gpudev,test-crypto-perf,test-dma-perf,test-regex,test-mldev,test-acl,test-flow-perf,dumpcap,test-compress-perf,test-security-perf,test-fib,pdump 00:05:09.235 disable_libs : mldev,jobstats,bpf,argparse,rawdev,rib,stack,bbdev,lpm,pipeline,member,port,regexdev,latencystats,table,bitratestats,acl,sched,node,graph,gso,dispatcher,efd,eventdev,pdcp,fib,pcapng,cfgfile,metrics,ip_frag,gro,pdump,gpudev,distributor,ipsec 00:05:09.235 enable_docs : false 00:05:09.235 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:05:09.235 enable_kmods : false 00:05:09.235 max_lcores : 128 00:05:09.235 tests : false 00:05:09.235 00:05:09.235 Found ninja-1.11.1.git.kitware.jobserver-1 at /var/spdk/dependencies/pip/bin/ninja 00:05:09.825 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:05:09.825 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:09.825 [2/268] Linking static target lib/librte_kvargs.a 00:05:09.825 [3/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:10.085 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:10.085 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:10.085 [6/268] Linking static target lib/librte_log.a 00:05:10.085 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:10.345 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:10.345 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:10.345 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:10.345 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:10.345 [12/268] Linking static target lib/librte_telemetry.a 00:05:10.345 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:10.345 [14/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.345 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:10.345 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:10.345 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:10.604 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:10.862 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:10.862 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.862 [21/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:10.862 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:10.862 [23/268] Linking target lib/librte_log.so.24.1 00:05:10.862 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:10.862 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:10.862 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:11.122 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.122 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:11.122 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:11.122 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:11.122 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:11.122 [32/268] Linking target lib/librte_kvargs.so.24.1 00:05:11.122 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:11.122 [34/268] Linking target lib/librte_telemetry.so.24.1 00:05:11.381 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:11.381 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:11.381 [37/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:11.381 [38/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:11.381 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:11.639 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:11.639 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:11.639 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:11.639 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:11.639 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:11.639 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:11.639 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:11.898 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:11.898 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:11.898 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:11.898 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:12.158 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:12.158 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:12.158 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:12.158 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:12.158 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:12.418 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:12.418 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:12.418 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:12.418 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:12.418 [60/268] Compiling C object 
lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:12.418 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:12.418 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:12.418 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:12.677 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:12.677 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:12.677 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:12.937 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:12.937 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:12.937 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:12.937 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:12.937 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:12.937 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:13.196 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:13.196 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:13.196 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:13.196 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:13.196 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:13.196 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:13.455 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:13.455 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:13.455 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:13.721 [82/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:13.721 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:13.721 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:13.721 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:13.987 [86/268] Linking static target lib/librte_eal.a 00:05:13.987 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:13.987 [88/268] Linking static target lib/librte_ring.a 00:05:13.987 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:13.987 [90/268] Linking static target lib/librte_rcu.a 00:05:13.987 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:13.987 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:14.246 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:14.246 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:14.246 [95/268] Linking static target lib/librte_mempool.a 00:05:14.246 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:14.246 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:14.246 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.505 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.819 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:14.819 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:14.819 
[102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:14.819 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:14.819 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:14.819 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:14.819 [106/268] Linking static target lib/librte_mbuf.a 00:05:14.819 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:14.819 [108/268] Linking static target lib/librte_meter.a 00:05:15.077 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:15.077 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:15.077 [111/268] Linking static target lib/librte_net.a 00:05:15.335 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.335 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.335 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:15.335 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:15.335 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:15.594 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.594 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:15.853 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.853 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:15.853 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:16.113 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:16.371 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:16.630 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:16.630 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:16.630 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:16.630 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:16.630 [128/268] Linking static target lib/librte_pci.a 00:05:16.630 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:16.630 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:16.630 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:16.888 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:16.888 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:16.888 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:16.888 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:16.888 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:16.888 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:16.888 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.888 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:17.147 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:17.147 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:17.147 [142/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:17.147 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:17.147 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:17.147 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:17.406 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:17.406 [147/268] Linking static target lib/librte_cmdline.a 00:05:17.664 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:17.664 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:17.664 [150/268] Linking static target lib/librte_timer.a 00:05:17.665 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:17.665 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:17.923 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:18.180 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:18.180 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:18.180 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:18.180 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.438 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:18.438 [159/268] Linking static target lib/librte_hash.a 00:05:18.438 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:18.697 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:18.697 [162/268] Linking static target lib/librte_compressdev.a 00:05:18.697 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:18.697 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:18.697 [165/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:18.955 [166/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:18.955 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:18.955 [168/268] Linking static target lib/librte_dmadev.a 00:05:18.955 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:18.955 [170/268] Linking static target lib/librte_ethdev.a 00:05:18.955 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.955 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:19.213 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:19.471 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:19.471 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:19.471 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.471 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.471 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:19.730 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:19.730 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:19.730 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:19.730 [182/268] 
Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.989 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:19.989 [184/268] Linking static target lib/librte_cryptodev.a 00:05:20.249 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:20.520 [186/268] Linking static target lib/librte_reorder.a 00:05:20.520 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:20.520 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:20.520 [189/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:20.520 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:20.520 [191/268] Linking static target lib/librte_power.a 00:05:20.520 [192/268] Linking static target lib/librte_security.a 00:05:20.520 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:20.779 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.039 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:21.039 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.607 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:21.607 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.607 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:21.607 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:21.607 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:21.867 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:22.125 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:22.126 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:22.126 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:22.384 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.384 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:22.385 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:22.385 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:22.385 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:22.385 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:22.643 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:22.643 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:22.643 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:22.643 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:22.643 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:22.643 [217/268] Linking static target drivers/librte_bus_vdev.a 00:05:22.643 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:22.643 [219/268] Linking static target drivers/librte_bus_pci.a 00:05:22.903 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 
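Each "Generating lib/<name>.sym_chk with a custom command" entry above is DPDK's per-library symbol check; the helper it wraps is not echoed in this log, but in upstream DPDK it is buildtools/check-symbols.sh, which compares the symbols exported by the freshly built archive against that library's version.map. A rough, hedged illustration of the same comparison for librte_kvargs (not the real helper, its parsing is simplified, and the paths assume the dpdk source and build-tmp layout used above):

    # Hedged sketch only; DPDK's actual check lives in buildtools/check-symbols.sh.
    nm --defined-only --extern-only build-tmp/lib/librte_kvargs.a \
        | awk 'NF == 3 {print $3}' | sort -u > /tmp/built.syms
    sed -n 's/^[[:space:]]*\([A-Za-z_][A-Za-z0-9_]*\);.*/\1/p' lib/kvargs/version.map \
        | sort -u > /tmp/map.syms
    comm -23 /tmp/built.syms /tmp/map.syms    # exported by the .a but absent from the map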
00:05:22.903 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:22.903 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:23.163 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:23.163 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.163 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:23.163 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:23.163 [227/268] Linking static target drivers/librte_mempool_ring.a 00:05:24.570 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:24.570 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:24.570 [230/268] Linking target lib/librte_eal.so.24.1 00:05:24.570 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:24.570 [232/268] Linking target lib/librte_pci.so.24.1 00:05:24.570 [233/268] Linking target lib/librte_meter.so.24.1 00:05:24.570 [234/268] Linking target lib/librte_timer.so.24.1 00:05:24.570 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:24.570 [236/268] Linking target lib/librte_ring.so.24.1 00:05:24.829 [237/268] Linking target lib/librte_dmadev.so.24.1 00:05:24.829 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:24.829 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:24.829 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:24.829 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:24.829 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:24.829 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:24.829 [244/268] Linking target lib/librte_mempool.so.24.1 00:05:24.829 [245/268] Linking target lib/librte_rcu.so.24.1 00:05:25.087 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:25.087 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:25.087 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:25.087 [249/268] Linking target lib/librte_mbuf.so.24.1 00:05:25.087 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:25.346 [251/268] Linking target lib/librte_net.so.24.1 00:05:25.346 [252/268] Linking target lib/librte_reorder.so.24.1 00:05:25.346 [253/268] Linking target lib/librte_compressdev.so.24.1 00:05:25.346 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:05:25.346 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:25.346 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:25.346 [257/268] Linking target lib/librte_hash.so.24.1 00:05:25.346 [258/268] Linking target lib/librte_cmdline.so.24.1 00:05:25.346 [259/268] Linking target lib/librte_security.so.24.1 00:05:25.605 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:28.141 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:28.141 [262/268] Linking target lib/librte_ethdev.so.24.1 00:05:28.141 
[263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:28.400 [264/268] Linking target lib/librte_power.so.24.1 00:05:28.968 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:28.968 [266/268] Linking static target lib/librte_vhost.a 00:05:31.534 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:31.534 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:31.534 INFO: autodetecting backend as ninja 00:05:31.534 INFO: calculating backend command to run: /var/spdk/dependencies/pip/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:32.937 CC lib/ut/ut.o 00:05:32.937 CC lib/log/log.o 00:05:32.937 CC lib/log/log_flags.o 00:05:32.937 CC lib/log/log_deprecated.o 00:05:32.937 CC lib/ut_mock/mock.o 00:05:32.937 LIB libspdk_ut.a 00:05:32.937 LIB libspdk_log.a 00:05:32.937 LIB libspdk_ut_mock.a 00:05:33.196 CC lib/dma/dma.o 00:05:33.196 CXX lib/trace_parser/trace.o 00:05:33.196 CC lib/ioat/ioat.o 00:05:33.196 CC lib/util/base64.o 00:05:33.196 CC lib/util/bit_array.o 00:05:33.196 CC lib/util/crc16.o 00:05:33.196 CC lib/util/crc32c.o 00:05:33.196 CC lib/util/cpuset.o 00:05:33.196 CC lib/util/crc32.o 00:05:33.196 CC lib/vfio_user/host/vfio_user_pci.o 00:05:33.456 CC lib/util/crc32_ieee.o 00:05:33.456 CC lib/util/crc64.o 00:05:33.456 CC lib/util/dif.o 00:05:33.456 LIB libspdk_dma.a 00:05:33.456 CC lib/util/fd.o 00:05:33.456 CC lib/vfio_user/host/vfio_user.o 00:05:33.456 CC lib/util/fd_group.o 00:05:33.456 CC lib/util/file.o 00:05:33.456 CC lib/util/hexlify.o 00:05:33.456 CC lib/util/iov.o 00:05:33.456 LIB libspdk_ioat.a 00:05:33.456 CC lib/util/math.o 00:05:33.456 CC lib/util/net.o 00:05:33.716 CC lib/util/pipe.o 00:05:33.716 CC lib/util/strerror_tls.o 00:05:33.716 CC lib/util/string.o 00:05:33.716 LIB libspdk_vfio_user.a 00:05:33.716 CC lib/util/uuid.o 00:05:33.716 CC lib/util/xor.o 00:05:33.716 CC lib/util/zipf.o 00:05:33.716 CC lib/util/md5.o 00:05:34.283 LIB libspdk_util.a 00:05:34.541 CC lib/conf/conf.o 00:05:34.541 CC lib/env_dpdk/env.o 00:05:34.541 CC lib/env_dpdk/memory.o 00:05:34.541 CC lib/env_dpdk/pci.o 00:05:34.541 CC lib/env_dpdk/init.o 00:05:34.541 CC lib/vmd/vmd.o 00:05:34.541 CC lib/idxd/idxd.o 00:05:34.541 CC lib/json/json_parse.o 00:05:34.541 CC lib/rdma_utils/rdma_utils.o 00:05:34.541 LIB libspdk_trace_parser.a 00:05:34.541 CC lib/idxd/idxd_user.o 00:05:34.801 LIB libspdk_conf.a 00:05:34.801 CC lib/json/json_util.o 00:05:34.801 CC lib/json/json_write.o 00:05:34.801 LIB libspdk_rdma_utils.a 00:05:34.801 CC lib/idxd/idxd_kernel.o 00:05:34.801 CC lib/env_dpdk/threads.o 00:05:34.801 CC lib/env_dpdk/pci_ioat.o 00:05:34.801 CC lib/env_dpdk/pci_virtio.o 00:05:35.061 CC lib/env_dpdk/pci_vmd.o 00:05:35.061 CC lib/vmd/led.o 00:05:35.061 CC lib/env_dpdk/pci_idxd.o 00:05:35.061 LIB libspdk_json.a 00:05:35.061 CC lib/env_dpdk/pci_event.o 00:05:35.061 CC lib/rdma_provider/common.o 00:05:35.061 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:35.061 CC lib/env_dpdk/sigbus_handler.o 00:05:35.319 CC lib/env_dpdk/pci_dpdk.o 00:05:35.319 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:35.319 LIB libspdk_vmd.a 00:05:35.319 LIB libspdk_idxd.a 00:05:35.319 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:35.319 CC lib/jsonrpc/jsonrpc_server.o 00:05:35.319 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:35.319 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:35.319 CC lib/jsonrpc/jsonrpc_client.o 00:05:35.319 LIB libspdk_rdma_provider.a 00:05:35.578 LIB libspdk_jsonrpc.a 00:05:36.149 CC lib/rpc/rpc.o 
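This is where the DPDK sub-build hands off to the SPDK compile (the CC lib/... objects that follow). The meson invocation itself is driven by SPDK's build scripts and is not echoed in the log, but the "User defined options" summary near the top of the DPDK section corresponds to ordinary meson and DPDK options. A rough equivalent, with the long app/lib/driver lists abbreviated (illustrative only, not the exact command the CI ran):

    # Illustrative reconstruction from the "User defined options" summary above;
    # fill the abbreviated disable/enable lists in from that summary.
    meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --buildtype=debug --default-library=static --libdir=lib \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps='test-pipeline,test-pmd,...' \
        -Ddisable_libs='mldev,jobstats,...' \
        -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring,...' \
        -Denable_docs=false -Denable_kmods=false \
        -Dmax_lcores=128 -Dtests=false
    ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10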
00:05:36.149 LIB libspdk_env_dpdk.a 00:05:36.408 LIB libspdk_rpc.a 00:05:36.668 CC lib/trace/trace_flags.o 00:05:36.668 CC lib/trace/trace.o 00:05:36.668 CC lib/trace/trace_rpc.o 00:05:36.668 CC lib/keyring/keyring.o 00:05:36.668 CC lib/keyring/keyring_rpc.o 00:05:36.668 CC lib/notify/notify.o 00:05:36.668 CC lib/notify/notify_rpc.o 00:05:36.927 LIB libspdk_notify.a 00:05:36.927 LIB libspdk_trace.a 00:05:36.927 LIB libspdk_keyring.a 00:05:37.495 CC lib/sock/sock.o 00:05:37.495 CC lib/thread/thread.o 00:05:37.495 CC lib/thread/iobuf.o 00:05:37.495 CC lib/sock/sock_rpc.o 00:05:37.754 LIB libspdk_sock.a 00:05:38.322 CC lib/nvme/nvme_ctrlr.o 00:05:38.322 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:38.322 CC lib/nvme/nvme_fabric.o 00:05:38.322 CC lib/nvme/nvme_ns_cmd.o 00:05:38.322 CC lib/nvme/nvme_ns.o 00:05:38.322 CC lib/nvme/nvme_pcie.o 00:05:38.322 CC lib/nvme/nvme_pcie_common.o 00:05:38.322 CC lib/nvme/nvme_qpair.o 00:05:38.322 CC lib/nvme/nvme.o 00:05:39.285 CC lib/nvme/nvme_quirks.o 00:05:39.285 CC lib/nvme/nvme_transport.o 00:05:39.285 CC lib/nvme/nvme_discovery.o 00:05:39.285 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:39.285 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:39.285 CC lib/nvme/nvme_tcp.o 00:05:39.285 LIB libspdk_thread.a 00:05:39.285 CC lib/nvme/nvme_opal.o 00:05:39.285 CC lib/nvme/nvme_io_msg.o 00:05:39.543 CC lib/nvme/nvme_poll_group.o 00:05:39.802 CC lib/nvme/nvme_zns.o 00:05:39.802 CC lib/nvme/nvme_stubs.o 00:05:39.802 CC lib/nvme/nvme_auth.o 00:05:39.802 CC lib/nvme/nvme_cuse.o 00:05:40.061 CC lib/nvme/nvme_rdma.o 00:05:40.061 CC lib/accel/accel.o 00:05:40.061 CC lib/blob/blobstore.o 00:05:40.320 CC lib/blob/request.o 00:05:40.320 CC lib/blob/zeroes.o 00:05:40.320 CC lib/blob/blob_bs_dev.o 00:05:40.579 CC lib/accel/accel_rpc.o 00:05:40.579 CC lib/accel/accel_sw.o 00:05:40.838 CC lib/init/json_config.o 00:05:40.838 CC lib/virtio/virtio.o 00:05:40.838 CC lib/init/subsystem.o 00:05:40.838 CC lib/init/subsystem_rpc.o 00:05:41.096 CC lib/init/rpc.o 00:05:41.096 CC lib/virtio/virtio_vhost_user.o 00:05:41.096 CC lib/fsdev/fsdev.o 00:05:41.096 CC lib/virtio/virtio_vfio_user.o 00:05:41.096 CC lib/fsdev/fsdev_io.o 00:05:41.096 CC lib/fsdev/fsdev_rpc.o 00:05:41.096 CC lib/virtio/virtio_pci.o 00:05:41.096 LIB libspdk_init.a 00:05:41.355 CC lib/event/app.o 00:05:41.355 CC lib/event/reactor.o 00:05:41.355 CC lib/event/log_rpc.o 00:05:41.355 CC lib/event/app_rpc.o 00:05:41.614 LIB libspdk_accel.a 00:05:41.614 LIB libspdk_virtio.a 00:05:41.614 CC lib/event/scheduler_static.o 00:05:41.614 LIB libspdk_nvme.a 00:05:41.614 CC lib/bdev/bdev.o 00:05:41.614 CC lib/bdev/bdev_zone.o 00:05:41.614 CC lib/bdev/bdev_rpc.o 00:05:41.873 CC lib/bdev/part.o 00:05:41.873 CC lib/bdev/scsi_nvme.o 00:05:41.873 LIB libspdk_fsdev.a 00:05:42.132 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:42.132 LIB libspdk_event.a 00:05:43.070 LIB libspdk_fuse_dispatcher.a 00:05:44.980 LIB libspdk_blob.a 00:05:44.980 CC lib/lvol/lvol.o 00:05:44.980 CC lib/blobfs/tree.o 00:05:44.980 CC lib/blobfs/blobfs.o 00:05:45.584 LIB libspdk_bdev.a 00:05:45.584 CC lib/ublk/ublk.o 00:05:45.843 CC lib/ublk/ublk_rpc.o 00:05:45.844 CC lib/scsi/dev.o 00:05:45.844 CC lib/scsi/lun.o 00:05:45.844 CC lib/scsi/port.o 00:05:45.844 CC lib/nvmf/ctrlr.o 00:05:45.844 CC lib/nbd/nbd.o 00:05:45.844 CC lib/ftl/ftl_core.o 00:05:45.844 CC lib/scsi/scsi.o 00:05:46.103 CC lib/scsi/scsi_bdev.o 00:05:46.103 CC lib/nvmf/ctrlr_discovery.o 00:05:46.103 CC lib/nvmf/ctrlr_bdev.o 00:05:46.103 LIB libspdk_blobfs.a 00:05:46.364 CC lib/ftl/ftl_init.o 00:05:46.364 CC 
lib/ftl/ftl_layout.o 00:05:46.364 LIB libspdk_lvol.a 00:05:46.364 CC lib/ftl/ftl_debug.o 00:05:46.622 CC lib/nbd/nbd_rpc.o 00:05:46.881 CC lib/ftl/ftl_io.o 00:05:46.881 CC lib/nvmf/subsystem.o 00:05:46.881 CC lib/ftl/ftl_sb.o 00:05:46.881 LIB libspdk_ublk.a 00:05:46.881 LIB libspdk_nbd.a 00:05:46.881 CC lib/nvmf/nvmf.o 00:05:46.881 CC lib/nvmf/nvmf_rpc.o 00:05:46.881 CC lib/nvmf/transport.o 00:05:46.882 CC lib/scsi/scsi_pr.o 00:05:46.882 CC lib/nvmf/tcp.o 00:05:47.141 CC lib/ftl/ftl_l2p.o 00:05:47.141 CC lib/ftl/ftl_l2p_flat.o 00:05:47.401 CC lib/scsi/scsi_rpc.o 00:05:47.660 CC lib/ftl/ftl_nv_cache.o 00:05:47.660 CC lib/scsi/task.o 00:05:47.660 CC lib/ftl/ftl_band.o 00:05:47.660 CC lib/nvmf/stubs.o 00:05:47.920 LIB libspdk_scsi.a 00:05:47.920 CC lib/ftl/ftl_band_ops.o 00:05:48.489 CC lib/ftl/ftl_writer.o 00:05:48.489 CC lib/ftl/ftl_rq.o 00:05:48.489 CC lib/ftl/ftl_reloc.o 00:05:48.748 CC lib/nvmf/mdns_server.o 00:05:48.748 CC lib/nvmf/rdma.o 00:05:48.748 CC lib/vhost/vhost.o 00:05:48.748 CC lib/iscsi/conn.o 00:05:48.748 CC lib/ftl/ftl_l2p_cache.o 00:05:49.008 CC lib/nvmf/auth.o 00:05:49.268 CC lib/iscsi/init_grp.o 00:05:49.268 CC lib/ftl/ftl_p2l.o 00:05:49.268 CC lib/ftl/ftl_p2l_log.o 00:05:49.527 CC lib/iscsi/iscsi.o 00:05:49.787 CC lib/iscsi/param.o 00:05:49.787 CC lib/iscsi/portal_grp.o 00:05:49.787 CC lib/iscsi/tgt_node.o 00:05:49.787 CC lib/ftl/mngt/ftl_mngt.o 00:05:49.787 CC lib/iscsi/iscsi_subsystem.o 00:05:50.046 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:50.046 CC lib/vhost/vhost_rpc.o 00:05:50.306 CC lib/iscsi/iscsi_rpc.o 00:05:50.306 CC lib/vhost/vhost_scsi.o 00:05:50.306 CC lib/vhost/vhost_blk.o 00:05:50.306 CC lib/vhost/rte_vhost_user.o 00:05:50.306 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:50.566 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:50.826 CC lib/iscsi/task.o 00:05:50.826 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:50.826 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:50.826 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:51.086 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:51.086 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:51.086 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:51.345 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:51.345 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:51.345 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:51.345 CC lib/ftl/utils/ftl_conf.o 00:05:51.345 CC lib/ftl/utils/ftl_md.o 00:05:51.604 CC lib/ftl/utils/ftl_mempool.o 00:05:51.604 CC lib/ftl/utils/ftl_bitmap.o 00:05:51.604 CC lib/ftl/utils/ftl_property.o 00:05:51.604 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:51.604 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:51.604 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:51.863 LIB libspdk_vhost.a 00:05:51.863 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:51.863 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:51.863 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:52.122 LIB libspdk_iscsi.a 00:05:52.122 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:52.122 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:52.122 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:52.122 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:52.122 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:52.122 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:52.122 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:52.381 CC lib/ftl/base/ftl_base_dev.o 00:05:52.381 CC lib/ftl/base/ftl_base_bdev.o 00:05:52.381 CC lib/ftl/ftl_trace.o 00:05:52.640 LIB libspdk_nvmf.a 00:05:52.899 LIB libspdk_ftl.a 00:05:53.158 CC module/env_dpdk/env_dpdk_rpc.o 00:05:53.417 CC module/accel/ioat/accel_ioat.o 00:05:53.417 CC module/keyring/linux/keyring.o 00:05:53.417 CC module/blob/bdev/blob_bdev.o 00:05:53.417 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:05:53.417 CC module/keyring/file/keyring.o 00:05:53.417 CC module/accel/error/accel_error.o 00:05:53.417 CC module/sock/posix/posix.o 00:05:53.417 CC module/accel/dsa/accel_dsa.o 00:05:53.417 CC module/fsdev/aio/fsdev_aio.o 00:05:53.417 LIB libspdk_env_dpdk_rpc.a 00:05:53.417 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:53.679 CC module/keyring/file/keyring_rpc.o 00:05:53.679 CC module/accel/error/accel_error_rpc.o 00:05:53.679 CC module/keyring/linux/keyring_rpc.o 00:05:53.679 CC module/accel/ioat/accel_ioat_rpc.o 00:05:53.679 LIB libspdk_scheduler_dynamic.a 00:05:53.679 LIB libspdk_blob_bdev.a 00:05:53.679 LIB libspdk_accel_error.a 00:05:53.679 CC module/fsdev/aio/linux_aio_mgr.o 00:05:53.679 CC module/accel/dsa/accel_dsa_rpc.o 00:05:53.679 LIB libspdk_accel_ioat.a 00:05:53.946 LIB libspdk_keyring_file.a 00:05:53.946 LIB libspdk_keyring_linux.a 00:05:53.946 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:53.946 LIB libspdk_accel_dsa.a 00:05:53.946 CC module/scheduler/gscheduler/gscheduler.o 00:05:53.946 CC module/accel/iaa/accel_iaa.o 00:05:53.946 CC module/bdev/error/vbdev_error.o 00:05:53.946 CC module/bdev/delay/vbdev_delay.o 00:05:54.204 LIB libspdk_scheduler_dpdk_governor.a 00:05:54.204 CC module/blobfs/bdev/blobfs_bdev.o 00:05:54.204 CC module/bdev/lvol/vbdev_lvol.o 00:05:54.204 CC module/bdev/gpt/gpt.o 00:05:54.204 CC module/bdev/gpt/vbdev_gpt.o 00:05:54.204 LIB libspdk_scheduler_gscheduler.a 00:05:54.204 CC module/accel/iaa/accel_iaa_rpc.o 00:05:54.464 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:54.464 LIB libspdk_fsdev_aio.a 00:05:54.464 CC module/bdev/malloc/bdev_malloc.o 00:05:54.464 LIB libspdk_sock_posix.a 00:05:54.464 LIB libspdk_accel_iaa.a 00:05:54.464 LIB libspdk_bdev_gpt.a 00:05:54.464 CC module/bdev/null/bdev_null.o 00:05:54.464 CC module/bdev/null/bdev_null_rpc.o 00:05:54.464 CC module/bdev/error/vbdev_error_rpc.o 00:05:54.464 LIB libspdk_blobfs_bdev.a 00:05:54.464 CC module/bdev/nvme/bdev_nvme.o 00:05:54.723 CC module/bdev/passthru/vbdev_passthru.o 00:05:54.723 CC module/bdev/raid/bdev_raid.o 00:05:54.723 CC module/bdev/raid/bdev_raid_rpc.o 00:05:54.723 CC module/bdev/split/vbdev_split.o 00:05:54.723 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:54.723 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:54.723 LIB libspdk_bdev_null.a 00:05:54.982 LIB libspdk_bdev_error.a 00:05:54.982 CC module/bdev/raid/bdev_raid_sb.o 00:05:54.982 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:54.982 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:54.982 CC module/bdev/raid/raid0.o 00:05:54.982 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:55.241 LIB libspdk_bdev_delay.a 00:05:55.241 LIB libspdk_bdev_malloc.a 00:05:55.241 CC module/bdev/split/vbdev_split_rpc.o 00:05:55.241 CC module/bdev/nvme/nvme_rpc.o 00:05:55.241 CC module/bdev/raid/raid1.o 00:05:55.241 LIB libspdk_bdev_passthru.a 00:05:55.241 CC module/bdev/nvme/bdev_mdns_client.o 00:05:55.242 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:55.242 LIB libspdk_bdev_lvol.a 00:05:55.500 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:55.500 LIB libspdk_bdev_split.a 00:05:55.500 CC module/bdev/nvme/vbdev_opal.o 00:05:55.500 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:55.500 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:55.500 CC module/bdev/aio/bdev_aio.o 00:05:55.500 CC module/bdev/raid/concat.o 00:05:55.758 LIB libspdk_bdev_zone_block.a 00:05:55.758 CC module/bdev/ftl/bdev_ftl.o 00:05:55.758 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:56.016 CC module/bdev/aio/bdev_aio_rpc.o 
00:05:56.016 CC module/bdev/iscsi/bdev_iscsi.o 00:05:56.016 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:56.016 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:56.016 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:56.016 LIB libspdk_bdev_raid.a 00:05:56.016 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:56.016 LIB libspdk_bdev_aio.a 00:05:56.016 LIB libspdk_bdev_ftl.a 00:05:56.582 LIB libspdk_bdev_iscsi.a 00:05:56.840 LIB libspdk_bdev_virtio.a 00:05:58.746 LIB libspdk_bdev_nvme.a 00:05:59.314 CC module/event/subsystems/sock/sock.o 00:05:59.314 CC module/event/subsystems/keyring/keyring.o 00:05:59.314 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:59.314 CC module/event/subsystems/vmd/vmd.o 00:05:59.314 CC module/event/subsystems/scheduler/scheduler.o 00:05:59.314 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:59.314 CC module/event/subsystems/fsdev/fsdev.o 00:05:59.314 CC module/event/subsystems/iobuf/iobuf.o 00:05:59.314 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:59.315 LIB libspdk_event_fsdev.a 00:05:59.315 LIB libspdk_event_sock.a 00:05:59.315 LIB libspdk_event_vhost_blk.a 00:05:59.315 LIB libspdk_event_keyring.a 00:05:59.315 LIB libspdk_event_vmd.a 00:05:59.315 LIB libspdk_event_scheduler.a 00:05:59.315 LIB libspdk_event_iobuf.a 00:05:59.574 CC module/event/subsystems/accel/accel.o 00:05:59.833 LIB libspdk_event_accel.a 00:06:00.092 CC module/event/subsystems/bdev/bdev.o 00:06:00.351 LIB libspdk_event_bdev.a 00:06:00.610 CC module/event/subsystems/ublk/ublk.o 00:06:00.610 CC module/event/subsystems/nbd/nbd.o 00:06:00.610 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:00.610 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:00.610 CC module/event/subsystems/scsi/scsi.o 00:06:00.868 LIB libspdk_event_nbd.a 00:06:00.868 LIB libspdk_event_scsi.a 00:06:00.868 LIB libspdk_event_ublk.a 00:06:00.868 LIB libspdk_event_nvmf.a 00:06:00.868 CC module/event/subsystems/iscsi/iscsi.o 00:06:00.868 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:01.126 LIB libspdk_event_vhost_scsi.a 00:06:01.126 LIB libspdk_event_iscsi.a 00:06:01.384 CXX app/trace/trace.o 00:06:01.384 CC app/trace_record/trace_record.o 00:06:01.640 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:01.640 CC app/nvmf_tgt/nvmf_main.o 00:06:01.640 CC examples/util/zipf/zipf.o 00:06:01.640 CC app/iscsi_tgt/iscsi_tgt.o 00:06:01.640 CC examples/ioat/perf/perf.o 00:06:01.640 CC test/thread/poller_perf/poller_perf.o 00:06:01.640 CC app/spdk_tgt/spdk_tgt.o 00:06:01.640 CC test/dma/test_dma/test_dma.o 00:06:01.640 LINK nvmf_tgt 00:06:01.640 LINK interrupt_tgt 00:06:01.898 LINK zipf 00:06:01.898 LINK poller_perf 00:06:01.898 LINK iscsi_tgt 00:06:01.898 LINK ioat_perf 00:06:01.898 LINK spdk_trace_record 00:06:01.898 LINK spdk_tgt 00:06:02.158 LINK spdk_trace 00:06:02.417 LINK test_dma 00:06:02.676 CC examples/ioat/verify/verify.o 00:06:02.934 CC test/thread/lock/spdk_lock.o 00:06:02.934 CC examples/sock/hello_world/hello_sock.o 00:06:02.934 CC examples/thread/thread/thread_ex.o 00:06:02.934 CC examples/vmd/lsvmd/lsvmd.o 00:06:02.934 LINK verify 00:06:03.192 CC test/app/bdev_svc/bdev_svc.o 00:06:03.192 LINK lsvmd 00:06:03.192 LINK hello_sock 00:06:03.451 LINK bdev_svc 00:06:03.451 LINK thread 00:06:03.710 CC examples/idxd/perf/perf.o 00:06:04.277 LINK idxd_perf 00:06:04.277 CC examples/vmd/led/led.o 00:06:04.536 CC app/spdk_lspci/spdk_lspci.o 00:06:04.536 CC app/spdk_nvme_perf/perf.o 00:06:04.536 LINK led 00:06:04.536 CC app/spdk_nvme_identify/identify.o 00:06:04.794 LINK spdk_lspci 00:06:04.794 CC 
app/spdk_nvme_discover/discovery_aer.o 00:06:04.794 CC app/spdk_top/spdk_top.o 00:06:05.050 LINK spdk_nvme_discover 00:06:05.050 LINK spdk_lock 00:06:05.307 CC app/vhost/vhost.o 00:06:05.564 LINK spdk_nvme_identify 00:06:05.823 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:05.823 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:05.823 LINK vhost 00:06:05.823 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:06.082 CC app/spdk_dd/spdk_dd.o 00:06:06.082 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:06.082 CC test/app/histogram_perf/histogram_perf.o 00:06:06.082 LINK spdk_nvme_perf 00:06:06.341 LINK nvme_fuzz 00:06:06.341 CC examples/nvme/hello_world/hello_world.o 00:06:06.341 LINK histogram_perf 00:06:06.599 LINK spdk_dd 00:06:06.599 LINK spdk_top 00:06:06.599 CC examples/nvme/reconnect/reconnect.o 00:06:06.599 LINK hello_world 00:06:06.857 LINK vhost_fuzz 00:06:07.114 LINK reconnect 00:06:07.114 CC app/fio/nvme/fio_plugin.o 00:06:07.371 TEST_HEADER include/spdk/accel.h 00:06:07.371 TEST_HEADER include/spdk/accel_module.h 00:06:07.371 TEST_HEADER include/spdk/assert.h 00:06:07.371 TEST_HEADER include/spdk/barrier.h 00:06:07.371 TEST_HEADER include/spdk/base64.h 00:06:07.371 TEST_HEADER include/spdk/bdev.h 00:06:07.371 TEST_HEADER include/spdk/bdev_module.h 00:06:07.371 TEST_HEADER include/spdk/bdev_zone.h 00:06:07.371 TEST_HEADER include/spdk/bit_array.h 00:06:07.371 TEST_HEADER include/spdk/bit_pool.h 00:06:07.371 TEST_HEADER include/spdk/blob.h 00:06:07.371 CC app/fio/bdev/fio_plugin.o 00:06:07.371 TEST_HEADER include/spdk/blob_bdev.h 00:06:07.371 TEST_HEADER include/spdk/blobfs.h 00:06:07.371 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:07.371 TEST_HEADER include/spdk/conf.h 00:06:07.371 TEST_HEADER include/spdk/config.h 00:06:07.371 TEST_HEADER include/spdk/cpuset.h 00:06:07.371 TEST_HEADER include/spdk/crc16.h 00:06:07.371 TEST_HEADER include/spdk/crc32.h 00:06:07.372 TEST_HEADER include/spdk/crc64.h 00:06:07.372 TEST_HEADER include/spdk/dif.h 00:06:07.372 TEST_HEADER include/spdk/dma.h 00:06:07.372 TEST_HEADER include/spdk/endian.h 00:06:07.372 TEST_HEADER include/spdk/env.h 00:06:07.372 TEST_HEADER include/spdk/env_dpdk.h 00:06:07.372 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:07.372 TEST_HEADER include/spdk/event.h 00:06:07.372 TEST_HEADER include/spdk/fd.h 00:06:07.372 TEST_HEADER include/spdk/fd_group.h 00:06:07.372 TEST_HEADER include/spdk/file.h 00:06:07.372 TEST_HEADER include/spdk/fsdev.h 00:06:07.372 TEST_HEADER include/spdk/fsdev_module.h 00:06:07.372 TEST_HEADER include/spdk/ftl.h 00:06:07.372 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:07.372 TEST_HEADER include/spdk/gpt_spec.h 00:06:07.372 TEST_HEADER include/spdk/hexlify.h 00:06:07.372 TEST_HEADER include/spdk/histogram_data.h 00:06:07.372 TEST_HEADER include/spdk/idxd.h 00:06:07.372 TEST_HEADER include/spdk/idxd_spec.h 00:06:07.372 TEST_HEADER include/spdk/init.h 00:06:07.372 TEST_HEADER include/spdk/ioat.h 00:06:07.372 TEST_HEADER include/spdk/ioat_spec.h 00:06:07.372 TEST_HEADER include/spdk/iscsi_spec.h 00:06:07.372 TEST_HEADER include/spdk/json.h 00:06:07.372 TEST_HEADER include/spdk/jsonrpc.h 00:06:07.372 TEST_HEADER include/spdk/keyring.h 00:06:07.372 TEST_HEADER include/spdk/keyring_module.h 00:06:07.372 TEST_HEADER include/spdk/likely.h 00:06:07.372 TEST_HEADER include/spdk/log.h 00:06:07.372 TEST_HEADER include/spdk/lvol.h 00:06:07.372 TEST_HEADER include/spdk/md5.h 00:06:07.372 TEST_HEADER include/spdk/memory.h 00:06:07.372 TEST_HEADER include/spdk/mmio.h 00:06:07.372 TEST_HEADER include/spdk/nbd.h 
00:06:07.372 TEST_HEADER include/spdk/net.h 00:06:07.372 TEST_HEADER include/spdk/notify.h 00:06:07.372 TEST_HEADER include/spdk/nvme.h 00:06:07.372 TEST_HEADER include/spdk/nvme_intel.h 00:06:07.372 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:07.372 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:07.372 TEST_HEADER include/spdk/nvme_spec.h 00:06:07.372 TEST_HEADER include/spdk/nvme_zns.h 00:06:07.372 TEST_HEADER include/spdk/nvmf.h 00:06:07.372 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:07.372 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:07.372 TEST_HEADER include/spdk/nvmf_spec.h 00:06:07.372 TEST_HEADER include/spdk/nvmf_transport.h 00:06:07.372 TEST_HEADER include/spdk/opal.h 00:06:07.372 TEST_HEADER include/spdk/opal_spec.h 00:06:07.372 TEST_HEADER include/spdk/pci_ids.h 00:06:07.372 TEST_HEADER include/spdk/pipe.h 00:06:07.372 TEST_HEADER include/spdk/queue.h 00:06:07.372 TEST_HEADER include/spdk/reduce.h 00:06:07.372 TEST_HEADER include/spdk/rpc.h 00:06:07.372 TEST_HEADER include/spdk/scheduler.h 00:06:07.372 TEST_HEADER include/spdk/scsi.h 00:06:07.372 TEST_HEADER include/spdk/scsi_spec.h 00:06:07.372 TEST_HEADER include/spdk/sock.h 00:06:07.372 TEST_HEADER include/spdk/stdinc.h 00:06:07.372 TEST_HEADER include/spdk/string.h 00:06:07.372 TEST_HEADER include/spdk/thread.h 00:06:07.372 TEST_HEADER include/spdk/trace.h 00:06:07.372 TEST_HEADER include/spdk/trace_parser.h 00:06:07.372 TEST_HEADER include/spdk/tree.h 00:06:07.372 TEST_HEADER include/spdk/ublk.h 00:06:07.372 TEST_HEADER include/spdk/util.h 00:06:07.372 TEST_HEADER include/spdk/uuid.h 00:06:07.372 TEST_HEADER include/spdk/version.h 00:06:07.630 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:07.630 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:07.630 TEST_HEADER include/spdk/vhost.h 00:06:07.630 TEST_HEADER include/spdk/vmd.h 00:06:07.630 TEST_HEADER include/spdk/xor.h 00:06:07.630 TEST_HEADER include/spdk/zipf.h 00:06:07.630 CXX test/cpp_headers/accel.o 00:06:07.630 CC examples/nvme/arbitration/arbitration.o 00:06:07.887 CXX test/cpp_headers/accel_module.o 00:06:07.887 CC examples/nvme/hotplug/hotplug.o 00:06:07.887 CXX test/cpp_headers/assert.o 00:06:08.144 LINK arbitration 00:06:08.144 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:08.402 LINK hotplug 00:06:08.402 CC examples/nvme/abort/abort.o 00:06:08.402 CXX test/cpp_headers/barrier.o 00:06:08.402 LINK nvme_manage 00:06:08.402 LINK spdk_nvme 00:06:08.402 CXX test/cpp_headers/base64.o 00:06:08.661 LINK spdk_bdev 00:06:08.661 LINK cmb_copy 00:06:08.661 CXX test/cpp_headers/bdev.o 00:06:08.661 LINK abort 00:06:08.661 LINK iscsi_fuzz 00:06:08.919 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:09.206 CXX test/cpp_headers/bdev_module.o 00:06:09.206 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:09.206 CXX test/cpp_headers/bdev_zone.o 00:06:09.495 CXX test/cpp_headers/bit_array.o 00:06:09.495 LINK hello_fsdev 00:06:09.495 CC test/app/jsoncat/jsoncat.o 00:06:09.753 CXX test/cpp_headers/bit_pool.o 00:06:09.753 CXX test/cpp_headers/blob.o 00:06:09.753 CXX test/cpp_headers/blob_bdev.o 00:06:09.753 CC test/app/stub/stub.o 00:06:09.753 LINK pmr_persistence 00:06:09.753 LINK jsoncat 00:06:10.012 CC test/env/vtophys/vtophys.o 00:06:10.012 CC test/env/mem_callbacks/mem_callbacks.o 00:06:10.012 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:10.012 CXX test/cpp_headers/blobfs.o 00:06:10.012 LINK stub 00:06:10.271 CC test/event/event_perf/event_perf.o 00:06:10.271 LINK vtophys 00:06:10.271 LINK env_dpdk_post_init 00:06:10.271 CXX test/cpp_headers/blobfs_bdev.o 
00:06:10.271 LINK event_perf 00:06:10.530 CXX test/cpp_headers/conf.o 00:06:10.790 CXX test/cpp_headers/config.o 00:06:10.790 CC test/env/memory/memory_ut.o 00:06:10.790 CXX test/cpp_headers/cpuset.o 00:06:11.049 LINK mem_callbacks 00:06:11.049 CC test/env/pci/pci_ut.o 00:06:11.049 CXX test/cpp_headers/crc16.o 00:06:11.049 CC test/event/reactor/reactor.o 00:06:11.049 CC test/event/reactor_perf/reactor_perf.o 00:06:11.307 LINK reactor 00:06:11.307 CC test/event/app_repeat/app_repeat.o 00:06:11.307 LINK reactor_perf 00:06:11.307 CXX test/cpp_headers/crc32.o 00:06:11.307 CXX test/cpp_headers/crc64.o 00:06:11.307 CC examples/accel/perf/accel_perf.o 00:06:11.566 LINK app_repeat 00:06:11.566 CC examples/blob/hello_world/hello_blob.o 00:06:11.566 LINK pci_ut 00:06:11.566 CC test/rpc_client/rpc_client_test.o 00:06:11.566 CXX test/cpp_headers/dif.o 00:06:11.566 CC test/nvme/aer/aer.o 00:06:11.825 LINK rpc_client_test 00:06:11.825 CXX test/cpp_headers/dma.o 00:06:11.825 LINK hello_blob 00:06:11.825 CXX test/cpp_headers/endian.o 00:06:12.083 CC test/event/scheduler/scheduler.o 00:06:12.083 CC examples/blob/cli/blobcli.o 00:06:12.083 CXX test/cpp_headers/env.o 00:06:12.083 LINK aer 00:06:12.083 LINK accel_perf 00:06:12.343 LINK scheduler 00:06:12.343 LINK memory_ut 00:06:12.343 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:06:12.343 CC test/nvme/reset/reset.o 00:06:12.343 CXX test/cpp_headers/env_dpdk.o 00:06:12.603 LINK blobcli 00:06:12.603 LINK histogram_ut 00:06:12.603 CXX test/cpp_headers/event.o 00:06:12.603 CXX test/cpp_headers/fd.o 00:06:12.903 LINK reset 00:06:12.903 CC test/accel/dif/dif.o 00:06:12.903 CXX test/cpp_headers/fd_group.o 00:06:13.161 CC test/nvme/sgl/sgl.o 00:06:13.162 CC test/unit/lib/log/log.c/log_ut.o 00:06:13.162 CC test/blobfs/mkfs/mkfs.o 00:06:13.162 CXX test/cpp_headers/file.o 00:06:13.162 CC test/lvol/esnap/esnap.o 00:06:13.420 CXX test/cpp_headers/fsdev.o 00:06:13.420 LINK mkfs 00:06:13.420 LINK sgl 00:06:13.679 CXX test/cpp_headers/fsdev_module.o 00:06:13.679 CXX test/cpp_headers/ftl.o 00:06:13.938 CXX test/cpp_headers/fuse_dispatcher.o 00:06:14.196 LINK log_ut 00:06:14.196 CXX test/cpp_headers/gpt_spec.o 00:06:14.196 CXX test/cpp_headers/hexlify.o 00:06:14.196 CXX test/cpp_headers/histogram_data.o 00:06:14.196 LINK dif 00:06:14.196 CXX test/cpp_headers/idxd.o 00:06:14.455 CXX test/cpp_headers/idxd_spec.o 00:06:14.455 CC test/nvme/e2edp/nvme_dp.o 00:06:14.455 CC test/unit/lib/rdma/common.c/common_ut.o 00:06:14.455 CC test/unit/lib/util/base64.c/base64_ut.o 00:06:14.455 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:06:14.455 CXX test/cpp_headers/init.o 00:06:14.713 CC examples/bdev/hello_world/hello_bdev.o 00:06:14.713 CC examples/bdev/bdevperf/bdevperf.o 00:06:14.713 LINK base64_ut 00:06:14.713 CXX test/cpp_headers/ioat.o 00:06:14.713 LINK nvme_dp 00:06:14.973 CXX test/cpp_headers/ioat_spec.o 00:06:14.973 LINK hello_bdev 00:06:14.973 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:06:14.973 CXX test/cpp_headers/iscsi_spec.o 00:06:15.233 LINK common_ut 00:06:15.233 LINK bit_array_ut 00:06:15.233 CXX test/cpp_headers/json.o 00:06:15.233 LINK cpuset_ut 00:06:15.233 CC test/unit/lib/dma/dma.c/dma_ut.o 00:06:15.491 CXX test/cpp_headers/jsonrpc.o 00:06:15.491 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:06:15.491 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:06:15.491 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:06:15.491 CC test/nvme/overhead/overhead.o 00:06:16.058 CXX test/cpp_headers/keyring.o 00:06:16.058 LINK crc16_ut 00:06:16.058 LINK 
crc32_ieee_ut 00:06:16.058 LINK overhead 00:06:16.058 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:06:16.058 CXX test/cpp_headers/keyring_module.o 00:06:16.316 LINK bdevperf 00:06:16.316 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:06:16.316 CC test/nvme/err_injection/err_injection.o 00:06:16.316 LINK ioat_ut 00:06:16.316 LINK crc32c_ut 00:06:16.316 LINK dma_ut 00:06:16.574 CXX test/cpp_headers/likely.o 00:06:16.574 LINK crc64_ut 00:06:16.574 CXX test/cpp_headers/log.o 00:06:16.574 CC test/unit/lib/util/dif.c/dif_ut.o 00:06:16.832 CC test/nvme/startup/startup.o 00:06:16.832 CC test/bdev/bdevio/bdevio.o 00:06:16.832 CC test/unit/lib/util/file.c/file_ut.o 00:06:16.832 CC test/nvme/reserve/reserve.o 00:06:17.090 LINK file_ut 00:06:17.090 LINK err_injection 00:06:17.090 CXX test/cpp_headers/lvol.o 00:06:17.090 LINK startup 00:06:17.349 LINK reserve 00:06:17.349 CC test/unit/lib/util/iov.c/iov_ut.o 00:06:17.349 LINK bdevio 00:06:17.349 CXX test/cpp_headers/md5.o 00:06:17.349 CXX test/cpp_headers/memory.o 00:06:17.608 LINK iov_ut 00:06:17.608 CXX test/cpp_headers/mmio.o 00:06:17.867 CC test/unit/lib/util/math.c/math_ut.o 00:06:17.867 CXX test/cpp_headers/nbd.o 00:06:17.867 CC test/unit/lib/util/net.c/net_ut.o 00:06:17.867 CXX test/cpp_headers/net.o 00:06:17.867 LINK math_ut 00:06:18.124 CXX test/cpp_headers/notify.o 00:06:18.124 LINK net_ut 00:06:18.124 CC test/nvme/connect_stress/connect_stress.o 00:06:18.124 CC test/nvme/simple_copy/simple_copy.o 00:06:18.124 CC test/nvme/boot_partition/boot_partition.o 00:06:18.124 LINK dif_ut 00:06:18.124 CC examples/nvmf/nvmf/nvmf.o 00:06:18.124 CXX test/cpp_headers/nvme.o 00:06:18.124 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:06:18.124 CXX test/cpp_headers/nvme_intel.o 00:06:18.383 LINK connect_stress 00:06:18.383 LINK boot_partition 00:06:18.383 LINK simple_copy 00:06:18.383 CXX test/cpp_headers/nvme_ocssd.o 00:06:18.383 CC test/unit/lib/util/string.c/string_ut.o 00:06:18.383 CC test/nvme/compliance/nvme_compliance.o 00:06:18.383 LINK nvmf 00:06:18.641 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:18.641 LINK string_ut 00:06:18.901 CXX test/cpp_headers/nvme_spec.o 00:06:18.901 LINK nvme_compliance 00:06:18.901 CXX test/cpp_headers/nvme_zns.o 00:06:18.901 CC test/nvme/fused_ordering/fused_ordering.o 00:06:18.901 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:18.901 LINK pipe_ut 00:06:19.165 CC test/nvme/fdp/fdp.o 00:06:19.165 CC test/unit/lib/util/xor.c/xor_ut.o 00:06:19.165 CC test/unit/lib/util/fd_group.c/fd_group_ut.o 00:06:19.165 CXX test/cpp_headers/nvmf.o 00:06:19.165 LINK doorbell_aers 00:06:19.165 LINK fused_ordering 00:06:19.445 CC test/nvme/cuse/cuse.o 00:06:19.445 CXX test/cpp_headers/nvmf_cmd.o 00:06:19.445 LINK fd_group_ut 00:06:19.445 LINK fdp 00:06:19.704 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:19.704 CXX test/cpp_headers/nvmf_spec.o 00:06:19.704 CXX test/cpp_headers/nvmf_transport.o 00:06:19.704 LINK xor_ut 00:06:19.963 CXX test/cpp_headers/opal.o 00:06:19.963 CXX test/cpp_headers/opal_spec.o 00:06:19.963 CXX test/cpp_headers/pci_ids.o 00:06:19.963 CXX test/cpp_headers/pipe.o 00:06:19.963 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:06:20.223 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:06:20.223 CXX test/cpp_headers/queue.o 00:06:20.223 LINK esnap 00:06:20.223 CXX test/cpp_headers/reduce.o 00:06:20.223 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:06:20.223 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:06:20.223 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:06:20.223 CC 
test/unit/lib/json/json_write.c/json_write_ut.o 00:06:20.482 CXX test/cpp_headers/rpc.o 00:06:20.482 CXX test/cpp_headers/scheduler.o 00:06:20.482 CXX test/cpp_headers/scsi.o 00:06:20.741 CXX test/cpp_headers/scsi_spec.o 00:06:20.741 CXX test/cpp_headers/sock.o 00:06:20.741 CXX test/cpp_headers/stdinc.o 00:06:20.741 LINK json_util_ut 00:06:20.741 LINK pci_event_ut 00:06:20.741 CXX test/cpp_headers/string.o 00:06:20.741 CXX test/cpp_headers/thread.o 00:06:20.741 LINK cuse 00:06:21.001 CXX test/cpp_headers/trace.o 00:06:21.001 CXX test/cpp_headers/trace_parser.o 00:06:21.001 CXX test/cpp_headers/tree.o 00:06:21.001 CXX test/cpp_headers/ublk.o 00:06:21.001 LINK idxd_user_ut 00:06:21.001 CXX test/cpp_headers/util.o 00:06:21.001 CXX test/cpp_headers/uuid.o 00:06:21.001 CXX test/cpp_headers/version.o 00:06:21.001 CXX test/cpp_headers/vfio_user_pci.o 00:06:21.001 CXX test/cpp_headers/vfio_user_spec.o 00:06:21.261 CXX test/cpp_headers/vhost.o 00:06:21.261 CXX test/cpp_headers/vmd.o 00:06:21.261 CXX test/cpp_headers/xor.o 00:06:21.261 CXX test/cpp_headers/zipf.o 00:06:21.261 LINK json_write_ut 00:06:21.519 LINK idxd_ut 00:06:22.909 LINK json_parse_ut 00:06:23.477 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:06:24.044 LINK jsonrpc_server_ut 00:06:24.612 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:06:25.601 LINK rpc_ut 00:06:25.859 CC test/unit/lib/sock/posix.c/posix_ut.o 00:06:25.859 CC test/unit/lib/sock/sock.c/sock_ut.o 00:06:25.859 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:06:25.859 CC test/unit/lib/thread/thread.c/thread_ut.o 00:06:26.119 CC test/unit/lib/notify/notify.c/notify_ut.o 00:06:26.119 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:06:26.687 LINK keyring_ut 00:06:27.359 LINK notify_ut 00:06:27.359 LINK iobuf_ut 00:06:27.619 LINK posix_ut 00:06:28.187 LINK sock_ut 00:06:28.754 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:06:28.754 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:06:28.754 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:06:28.754 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:06:28.754 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:06:28.754 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:06:28.754 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:06:28.754 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:06:28.754 LINK thread_ut 00:06:28.754 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:06:29.013 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:06:29.948 LINK nvme_ns_ut 00:06:30.206 LINK nvme_ctrlr_ocssd_cmd_ut 00:06:30.206 LINK nvme_poll_group_ut 00:06:30.206 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:06:30.206 LINK nvme_ctrlr_cmd_ut 00:06:30.465 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:06:30.465 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:06:30.465 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:06:30.724 LINK nvme_ut 00:06:30.724 LINK nvme_qpair_ut 00:06:30.724 LINK nvme_ns_ocssd_cmd_ut 00:06:30.983 LINK nvme_quirks_ut 00:06:30.983 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:06:30.983 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:06:31.241 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:06:31.241 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:06:31.241 LINK nvme_ns_cmd_ut 00:06:31.501 LINK nvme_pcie_ut 00:06:31.501 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:06:31.759 LINK nvme_transport_ut 00:06:32.063 CC test/unit/lib/accel/accel.c/accel_ut.o 
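The TEST_HEADER include/spdk/*.h entries and the matching CXX test/cpp_headers/*.o objects in the stretch above are SPDK's public-header check: going by the one-object-per-header pattern, every header under include/spdk is compiled as its own C++ translation unit, so a header that is not self-contained or not C++-clean fails here rather than in a consumer. A hypothetical stand-alone version of that check (scratch directory, include path and flags are assumptions, not SPDK's actual build rule):

    # Hypothetical re-creation of the per-header compile check suggested by the
    # TEST_HEADER / CXX test/cpp_headers pairs above.
    mkdir -p /tmp/spdk_hdr_check
    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        printf '#include "spdk/%s.h"\n' "$name" > "/tmp/spdk_hdr_check/$name.cpp"
        g++ -std=c++11 -Iinclude -c "/tmp/spdk_hdr_check/$name.cpp" \
            -o "/tmp/spdk_hdr_check/$name.o"
    done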
00:06:32.063 LINK nvme_io_msg_ut 00:06:32.063 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:06:32.324 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:06:32.324 LINK nvme_opal_ut 00:06:32.324 LINK nvme_fabric_ut 00:06:32.585 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:06:32.585 CC test/unit/lib/blob/blob.c/blob_ut.o 00:06:32.846 LINK nvme_ctrlr_ut 00:06:32.846 LINK nvme_pcie_common_ut 00:06:33.106 LINK blob_bdev_ut 00:06:33.107 LINK rpc_ut 00:06:33.107 CC test/unit/lib/fsdev/fsdev.c/fsdev_ut.o 00:06:33.107 LINK subsystem_ut 00:06:33.675 CC test/unit/lib/event/app.c/app_ut.o 00:06:33.675 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:06:34.245 LINK nvme_cuse_ut 00:06:34.245 LINK nvme_tcp_ut 00:06:34.505 LINK nvme_rdma_ut 00:06:34.505 LINK fsdev_ut 00:06:34.765 LINK app_ut 00:06:35.350 LINK reactor_ut 00:06:35.350 LINK accel_ut 00:06:35.920 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:06:35.920 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:06:35.920 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:06:35.920 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:06:35.920 CC test/unit/lib/bdev/part.c/part_ut.o 00:06:35.920 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:06:36.179 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:06:36.179 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:06:36.179 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:06:36.179 LINK scsi_nvme_ut 00:06:36.439 LINK bdev_zone_ut 00:06:36.700 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:06:36.700 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:06:36.981 LINK gpt_ut 00:06:37.239 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:06:37.239 LINK vbdev_zone_block_ut 00:06:37.497 LINK bdev_raid_sb_ut 00:06:37.497 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:06:38.063 LINK vbdev_lvol_ut 00:06:38.063 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:06:38.063 LINK concat_ut 00:06:38.630 LINK raid1_ut 00:06:38.888 LINK bdev_raid_ut 00:06:39.181 LINK raid0_ut 00:06:41.098 LINK part_ut 00:06:41.664 LINK bdev_ut 00:06:42.234 LINK blob_ut 00:06:42.802 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:06:42.802 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:06:42.802 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:06:42.802 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:06:42.802 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:06:43.062 LINK blobfs_bdev_ut 00:06:43.062 LINK tree_ut 00:06:44.000 LINK bdev_nvme_ut 00:06:44.000 LINK bdev_ut 00:06:44.569 LINK blobfs_sync_ut 00:06:44.569 LINK blobfs_async_ut 00:06:44.569 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:06:44.569 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:06:44.827 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:06:44.827 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:06:44.827 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:06:44.828 CC test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut.o 00:06:44.828 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:06:45.086 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:06:45.086 LINK ftl_bitmap_ut 00:06:45.086 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:06:45.345 LINK dev_ut 00:06:45.345 LINK ftl_l2p_ut 00:06:45.345 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:06:45.603 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:06:45.603 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:06:45.863 LINK lvol_ut 00:06:45.863 LINK ftl_io_ut 00:06:46.206 LINK ftl_mempool_ut 00:06:46.206 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:06:46.206 CC 
test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:06:46.466 LINK ftl_p2l_ut 00:06:46.466 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:06:46.725 LINK ftl_mngt_ut 00:06:46.725 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:06:46.725 LINK lun_ut 00:06:46.983 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:06:46.983 LINK ftl_band_ut 00:06:46.983 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:06:47.241 LINK scsi_ut 00:06:47.241 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:06:47.809 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:06:48.378 LINK scsi_pr_ut 00:06:48.378 LINK ctrlr_bdev_ut 00:06:48.378 LINK ftl_sb_ut 00:06:48.637 LINK subsystem_ut 00:06:48.637 LINK ftl_layout_upgrade_ut 00:06:48.637 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:06:48.897 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:06:48.897 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:06:49.464 LINK scsi_bdev_ut 00:06:49.723 LINK ctrlr_discovery_ut 00:06:50.033 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:06:50.033 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:06:50.033 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:06:50.033 LINK ctrlr_ut 00:06:50.308 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:06:50.308 CC test/unit/lib/iscsi/param.c/param_ut.o 00:06:50.566 LINK nvmf_ut 00:06:50.566 LINK init_grp_ut 00:06:50.825 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:06:51.084 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:06:51.084 LINK param_ut 00:06:51.651 LINK auth_ut 00:06:51.961 LINK tcp_ut 00:06:52.239 LINK conn_ut 00:06:52.499 LINK tgt_node_ut 00:06:52.499 LINK portal_grp_ut 00:06:53.454 LINK transport_ut 00:06:53.454 LINK iscsi_ut 00:06:53.713 LINK rdma_ut 00:06:53.971 LINK vhost_ut 00:06:54.232 00:06:54.232 real 2m30.880s 00:06:54.232 user 12m2.891s 00:06:54.232 sys 2m45.974s 00:06:54.232 07:16:58 unittest_build -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:54.232 07:16:58 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:06:54.232 ************************************ 00:06:54.232 END TEST unittest_build 00:06:54.232 ************************************ 00:06:54.232 07:16:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:54.232 07:16:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:54.232 07:16:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:54.232 07:16:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:54.232 07:16:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:54.232 07:16:58 -- pm/common@44 -- $ pid=2444 00:06:54.232 07:16:58 -- pm/common@50 -- $ kill -TERM 2444 00:06:54.232 07:16:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:54.232 07:16:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:54.232 07:16:58 -- pm/common@44 -- $ pid=2446 00:06:54.232 07:16:58 -- pm/common@50 -- $ kill -TERM 2446 00:06:54.232 07:16:58 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:54.232 07:16:58 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:54.495 07:16:58 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:54.495 07:16:58 -- common/autotest_common.sh@1693 -- # lcov --version 00:06:54.495 07:16:58 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:54.495 07:16:58 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:54.495 07:16:58 -- 
scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.495 07:16:58 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.495 07:16:58 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.495 07:16:58 -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.495 07:16:58 -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.495 07:16:58 -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.495 07:16:58 -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.495 07:16:58 -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.495 07:16:58 -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.495 07:16:58 -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.495 07:16:58 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.495 07:16:58 -- scripts/common.sh@344 -- # case "$op" in 00:06:54.495 07:16:58 -- scripts/common.sh@345 -- # : 1 00:06:54.495 07:16:58 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.495 07:16:58 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:54.495 07:16:58 -- scripts/common.sh@365 -- # decimal 1 00:06:54.495 07:16:58 -- scripts/common.sh@353 -- # local d=1 00:06:54.495 07:16:58 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.495 07:16:58 -- scripts/common.sh@355 -- # echo 1 00:06:54.495 07:16:58 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.495 07:16:58 -- scripts/common.sh@366 -- # decimal 2 00:06:54.495 07:16:58 -- scripts/common.sh@353 -- # local d=2 00:06:54.495 07:16:58 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.495 07:16:58 -- scripts/common.sh@355 -- # echo 2 00:06:54.495 07:16:58 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.495 07:16:58 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.495 07:16:58 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.495 07:16:58 -- scripts/common.sh@368 -- # return 0 00:06:54.495 07:16:58 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.495 07:16:58 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:54.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.495 --rc genhtml_branch_coverage=1 00:06:54.495 --rc genhtml_function_coverage=1 00:06:54.495 --rc genhtml_legend=1 00:06:54.495 --rc geninfo_all_blocks=1 00:06:54.495 --rc geninfo_unexecuted_blocks=1 00:06:54.495 00:06:54.495 ' 00:06:54.495 07:16:58 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:54.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.495 --rc genhtml_branch_coverage=1 00:06:54.495 --rc genhtml_function_coverage=1 00:06:54.495 --rc genhtml_legend=1 00:06:54.495 --rc geninfo_all_blocks=1 00:06:54.495 --rc geninfo_unexecuted_blocks=1 00:06:54.495 00:06:54.495 ' 00:06:54.495 07:16:58 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:54.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.495 --rc genhtml_branch_coverage=1 00:06:54.495 --rc genhtml_function_coverage=1 00:06:54.495 --rc genhtml_legend=1 00:06:54.495 --rc geninfo_all_blocks=1 00:06:54.495 --rc geninfo_unexecuted_blocks=1 00:06:54.495 00:06:54.496 ' 00:06:54.496 07:16:58 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:54.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.496 --rc genhtml_branch_coverage=1 00:06:54.496 --rc genhtml_function_coverage=1 00:06:54.496 --rc genhtml_legend=1 00:06:54.496 --rc geninfo_all_blocks=1 00:06:54.496 --rc geninfo_unexecuted_blocks=1 00:06:54.496 00:06:54.496 ' 00:06:54.496 07:16:58 -- spdk/autotest.sh@25 
-- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:54.496 07:16:58 -- nvmf/common.sh@7 -- # uname -s 00:06:54.496 07:16:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.496 07:16:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.496 07:16:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.496 07:16:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.496 07:16:58 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.496 07:16:58 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:06:54.496 07:16:58 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.496 07:16:58 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:06:54.496 07:16:58 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2cb5538-f466-4c14-8b32-9b15eda1a8a3 00:06:54.496 07:16:58 -- nvmf/common.sh@16 -- # NVME_HOSTID=d2cb5538-f466-4c14-8b32-9b15eda1a8a3 00:06:54.496 07:16:58 -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.496 07:16:58 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:06:54.496 07:16:58 -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:06:54.496 07:16:58 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.496 07:16:58 -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:54.496 07:16:58 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:54.496 07:16:58 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.496 07:16:58 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.496 07:16:58 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.496 07:16:58 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:54.496 07:16:58 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:54.496 07:16:58 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:54.496 07:16:58 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:54.496 07:16:58 -- paths/export.sh@6 -- # export PATH 00:06:54.496 07:16:58 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:54.496 07:16:58 -- nvmf/common.sh@48 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:06:54.496 07:16:58 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:06:54.496 07:16:58 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:54.496 07:16:58 -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:06:54.496 07:16:58 -- nvmf/common.sh@50 -- # : 0 00:06:54.496 07:16:58 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:06:54.496 07:16:58 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:06:54.496 07:16:58 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:06:54.496 07:16:58 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.496 07:16:58 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.496 07:16:58 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:06:54.496 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:06:54.496 07:16:58 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:06:54.496 07:16:58 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:06:54.496 07:16:58 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:06:54.496 07:16:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:54.496 07:16:58 -- spdk/autotest.sh@32 -- # uname -s 00:06:54.496 07:16:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:54.496 07:16:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:06:54.496 07:16:58 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:54.496 07:16:58 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:54.496 07:16:58 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:54.496 07:16:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:54.496 07:16:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:54.496 07:16:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:06:54.496 07:16:58 -- spdk/autotest.sh@48 -- # udevadm_pid=60581 00:06:54.496 07:16:58 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:06:54.496 07:16:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:54.496 07:16:58 -- pm/common@17 -- # local monitor 00:06:54.496 07:16:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:54.496 07:16:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:54.496 07:16:58 -- pm/common@25 -- # sleep 1 00:06:54.496 07:16:58 -- pm/common@21 -- # date +%s 00:06:54.496 07:16:58 -- pm/common@21 -- # date +%s 00:06:54.496 07:16:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732087018 00:06:54.496 07:16:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732087018 00:06:54.496 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732087018_collect-vmstat.pm.log 00:06:54.761 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732087018_collect-cpu-load.pm.log 00:06:55.763 07:16:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:55.763 07:16:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:55.763 07:16:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:55.763 07:16:59 -- common/autotest_common.sh@10 -- # set +x 00:06:55.763 07:16:59 -- spdk/autotest.sh@59 -- # create_test_list 
00:06:55.763 07:16:59 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:55.763 07:16:59 -- common/autotest_common.sh@10 -- # set +x 00:06:55.763 07:16:59 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:55.763 07:16:59 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:55.763 07:16:59 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:55.763 07:16:59 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:55.763 07:16:59 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:55.763 07:16:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:55.763 07:16:59 -- common/autotest_common.sh@1457 -- # uname 00:06:55.763 07:16:59 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:55.763 07:16:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:55.763 07:16:59 -- common/autotest_common.sh@1477 -- # uname 00:06:55.763 07:16:59 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:55.763 07:16:59 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:55.763 07:16:59 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:55.763 lcov: LCOV version 1.15 00:06:55.763 07:16:59 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:02.345 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:02.345 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:10.042 07:18:04 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:10.042 07:18:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:10.042 07:18:04 -- common/autotest_common.sh@10 -- # set +x 00:08:10.042 07:18:04 -- spdk/autotest.sh@78 -- # rm -f 00:08:10.042 07:18:04 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:10.042 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:08:10.042 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:10.042 07:18:04 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:10.042 07:18:04 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:10.042 07:18:04 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:10.042 07:18:04 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:08:10.042 07:18:04 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:10.042 07:18:04 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:10.042 07:18:04 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:10.042 07:18:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:10.042 07:18:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:10.042 07:18:04 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:10.042 07:18:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:10.042 07:18:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:10.042 07:18:04 -- spdk/autotest.sh@100 -- 
# block_in_use /dev/nvme0n1 00:08:10.042 07:18:04 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:10.042 07:18:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:10.042 No valid GPT data, bailing 00:08:10.042 07:18:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:10.042 07:18:04 -- scripts/common.sh@394 -- # pt= 00:08:10.042 07:18:04 -- scripts/common.sh@395 -- # return 1 00:08:10.042 07:18:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:10.042 1+0 records in 00:08:10.042 1+0 records out 00:08:10.042 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00759098 s, 138 MB/s 00:08:10.042 07:18:04 -- spdk/autotest.sh@105 -- # sync 00:08:10.042 07:18:04 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:10.042 07:18:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:10.042 07:18:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:10.042 07:18:07 -- spdk/autotest.sh@111 -- # uname -s 00:08:10.042 07:18:07 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:10.042 07:18:07 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:10.042 07:18:07 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:10.042 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:08:10.042 Hugepages 00:08:10.042 node hugesize free / total 00:08:10.042 node0 1048576kB 0 / 0 00:08:10.042 node0 2048kB 0 / 0 00:08:10.042 00:08:10.042 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:10.042 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:10.042 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:10.042 07:18:08 -- spdk/autotest.sh@117 -- # uname -s 00:08:10.042 07:18:08 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:10.042 07:18:08 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:10.042 07:18:08 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:10.042 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:08:10.042 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:10.042 07:18:09 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:10.043 07:18:10 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:10.043 07:18:10 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:10.043 07:18:10 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:10.043 07:18:10 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:10.043 07:18:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:10.043 07:18:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:10.043 07:18:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:10.043 07:18:10 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:10.043 07:18:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:10.043 07:18:10 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:10.043 07:18:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 00:08:10.043 07:18:10 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:10.043 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 
00:08:10.043 Waiting for block devices as requested 00:08:10.043 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:10.043 07:18:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:10.043 07:18:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:10.043 07:18:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:08:10.043 07:18:11 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:08:10.043 07:18:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:08:10.043 07:18:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:08:10.043 07:18:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:08:10.043 07:18:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:10.043 07:18:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:10.043 07:18:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:10.043 07:18:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:10.043 07:18:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:10.043 07:18:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:10.043 07:18:11 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:10.043 07:18:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:10.043 07:18:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:10.043 07:18:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:10.043 07:18:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:10.043 07:18:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:10.043 07:18:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:10.043 07:18:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:10.043 07:18:11 -- common/autotest_common.sh@1543 -- # continue 00:08:10.043 07:18:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:10.043 07:18:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:10.043 07:18:11 -- common/autotest_common.sh@10 -- # set +x 00:08:10.043 07:18:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:10.043 07:18:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:10.043 07:18:11 -- common/autotest_common.sh@10 -- # set +x 00:08:10.043 07:18:11 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:10.043 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:08:10.043 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:10.043 07:18:12 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:10.043 07:18:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:10.043 07:18:12 -- common/autotest_common.sh@10 -- # set +x 00:08:10.043 07:18:12 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:10.043 07:18:12 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:10.043 07:18:12 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:10.043 07:18:12 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:10.043 07:18:12 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:10.043 07:18:12 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:10.043 07:18:12 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:10.043 07:18:12 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:10.043 
07:18:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:10.043 07:18:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:10.043 07:18:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:10.043 07:18:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:10.043 07:18:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:10.043 07:18:12 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:10.043 07:18:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 00:08:10.043 07:18:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:10.043 07:18:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:10.043 07:18:12 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:10.043 07:18:12 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:10.043 07:18:12 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:08:10.043 07:18:12 -- common/autotest_common.sh@1572 -- # return 0 00:08:10.043 07:18:12 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:08:10.043 07:18:12 -- common/autotest_common.sh@1580 -- # return 0 00:08:10.043 07:18:12 -- spdk/autotest.sh@137 -- # '[' 1 -eq 1 ']' 00:08:10.043 07:18:12 -- spdk/autotest.sh@138 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:08:10.043 07:18:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.043 07:18:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.043 07:18:12 -- common/autotest_common.sh@10 -- # set +x 00:08:10.043 ************************************ 00:08:10.043 START TEST unittest 00:08:10.043 ************************************ 00:08:10.043 07:18:12 unittest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:08:10.043 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:08:10.043 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:08:10.043 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:08:10.043 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:08:10.043 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
00:08:10.043 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:10.043 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:08:10.043 ++ rpc_py=rpc_cmd 00:08:10.043 ++ set -e 00:08:10.043 ++ shopt -s nullglob 00:08:10.043 ++ shopt -s extglob 00:08:10.043 ++ shopt -s inherit_errexit 00:08:10.043 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:08:10.043 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:10.043 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:10.043 +++ CONFIG_WPDK_DIR= 00:08:10.043 +++ CONFIG_ASAN=y 00:08:10.043 +++ CONFIG_VBDEV_COMPRESS=n 00:08:10.043 +++ CONFIG_HAVE_EXECINFO_H=y 00:08:10.043 +++ CONFIG_USDT=n 00:08:10.043 +++ CONFIG_CUSTOMOCF=n 00:08:10.043 +++ CONFIG_PREFIX=/usr/local 00:08:10.043 +++ CONFIG_RBD=n 00:08:10.043 +++ CONFIG_LIBDIR= 00:08:10.043 +++ CONFIG_IDXD=y 00:08:10.043 +++ CONFIG_NVME_CUSE=y 00:08:10.043 +++ CONFIG_SMA=n 00:08:10.043 +++ CONFIG_VTUNE=n 00:08:10.043 +++ CONFIG_TSAN=n 00:08:10.043 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:10.043 +++ CONFIG_VFIO_USER_DIR= 00:08:10.043 +++ CONFIG_MAX_NUMA_NODES=1 00:08:10.043 +++ CONFIG_PGO_CAPTURE=n 00:08:10.043 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:10.043 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:10.043 +++ CONFIG_LTO=n 00:08:10.043 +++ CONFIG_ISCSI_INITIATOR=y 00:08:10.043 +++ CONFIG_CET=n 00:08:10.043 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:10.043 +++ CONFIG_OCF_PATH= 00:08:10.043 +++ CONFIG_RDMA_SET_TOS=y 00:08:10.043 +++ CONFIG_AIO_FSDEV=y 00:08:10.043 +++ CONFIG_HAVE_ARC4RANDOM=y 00:08:10.043 +++ CONFIG_HAVE_LIBARCHIVE=n 00:08:10.043 +++ CONFIG_UBLK=y 00:08:10.043 +++ CONFIG_ISAL_CRYPTO=y 00:08:10.043 +++ CONFIG_OPENSSL_PATH= 00:08:10.043 +++ CONFIG_OCF=n 00:08:10.043 +++ CONFIG_FUSE=n 00:08:10.043 +++ CONFIG_VTUNE_DIR= 00:08:10.043 +++ CONFIG_FUZZER_LIB= 00:08:10.043 +++ CONFIG_FUZZER=n 00:08:10.043 +++ CONFIG_FSDEV=y 00:08:10.043 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:08:10.043 +++ CONFIG_CRYPTO=n 00:08:10.043 +++ CONFIG_PGO_USE=n 00:08:10.043 +++ CONFIG_VHOST=y 00:08:10.043 +++ CONFIG_DAOS=n 00:08:10.043 +++ CONFIG_DPDK_INC_DIR= 00:08:10.043 +++ CONFIG_DAOS_DIR= 00:08:10.043 +++ CONFIG_UNIT_TESTS=y 00:08:10.043 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:10.043 +++ CONFIG_VIRTIO=y 00:08:10.043 +++ CONFIG_DPDK_UADK=n 00:08:10.043 +++ CONFIG_COVERAGE=y 00:08:10.043 +++ CONFIG_RDMA=y 00:08:10.043 +++ CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:08:10.043 +++ CONFIG_HAVE_LZ4=n 00:08:10.043 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:10.043 +++ CONFIG_URING_PATH= 00:08:10.044 +++ CONFIG_XNVME=n 00:08:10.044 +++ CONFIG_VFIO_USER=n 00:08:10.044 +++ CONFIG_ARCH=native 00:08:10.044 +++ CONFIG_HAVE_EVP_MAC=y 00:08:10.044 +++ CONFIG_URING_ZNS=n 00:08:10.044 +++ CONFIG_WERROR=y 00:08:10.044 +++ CONFIG_HAVE_LIBBSD=n 00:08:10.044 +++ CONFIG_UBSAN=y 00:08:10.044 +++ CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:08:10.044 +++ CONFIG_IPSEC_MB_DIR= 00:08:10.044 +++ CONFIG_GOLANG=n 00:08:10.044 +++ CONFIG_ISAL=y 00:08:10.044 +++ CONFIG_IDXD_KERNEL=y 00:08:10.044 +++ CONFIG_DPDK_LIB_DIR= 00:08:10.044 +++ CONFIG_RDMA_PROV=verbs 00:08:10.044 +++ CONFIG_APPS=y 00:08:10.044 +++ CONFIG_SHARED=n 00:08:10.044 +++ CONFIG_HAVE_KEYUTILS=y 00:08:10.044 +++ CONFIG_FC_PATH= 00:08:10.044 +++ CONFIG_DPDK_PKG_CONFIG=n 00:08:10.044 +++ CONFIG_FC=n 00:08:10.044 +++ CONFIG_AVAHI=n 00:08:10.044 +++ CONFIG_FIO_PLUGIN=y 00:08:10.044 +++ CONFIG_RAID5F=n 00:08:10.044 +++ CONFIG_EXAMPLES=y 00:08:10.044 +++ CONFIG_TESTS=y 00:08:10.044 +++ 
CONFIG_CRYPTO_MLX5=n 00:08:10.044 +++ CONFIG_MAX_LCORES=128 00:08:10.044 +++ CONFIG_IPSEC_MB=n 00:08:10.044 +++ CONFIG_PGO_DIR= 00:08:10.044 +++ CONFIG_DEBUG=y 00:08:10.044 +++ CONFIG_DPDK_COMPRESSDEV=n 00:08:10.044 +++ CONFIG_CROSS_PREFIX= 00:08:10.044 +++ CONFIG_COPY_FILE_RANGE=y 00:08:10.044 +++ CONFIG_URING=n 00:08:10.044 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:10.044 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:10.044 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:08:10.044 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:08:10.044 +++ _root=/home/vagrant/spdk_repo/spdk 00:08:10.044 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:08:10.044 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:08:10.044 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:08:10.044 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:10.044 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:10.044 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:10.044 +++ VHOST_APP=("$_app_dir/vhost") 00:08:10.044 +++ DD_APP=("$_app_dir/spdk_dd") 00:08:10.044 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:08:10.044 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:08:10.044 +++ [[ #ifndef SPDK_CONFIG_H 00:08:10.044 #define SPDK_CONFIG_H 00:08:10.044 #define SPDK_CONFIG_AIO_FSDEV 1 00:08:10.044 #define SPDK_CONFIG_APPS 1 00:08:10.044 #define SPDK_CONFIG_ARCH native 00:08:10.044 #define SPDK_CONFIG_ASAN 1 00:08:10.044 #undef SPDK_CONFIG_AVAHI 00:08:10.044 #undef SPDK_CONFIG_CET 00:08:10.044 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:08:10.044 #define SPDK_CONFIG_COVERAGE 1 00:08:10.044 #define SPDK_CONFIG_CROSS_PREFIX 00:08:10.044 #undef SPDK_CONFIG_CRYPTO 00:08:10.044 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:10.044 #undef SPDK_CONFIG_CUSTOMOCF 00:08:10.044 #undef SPDK_CONFIG_DAOS 00:08:10.044 #define SPDK_CONFIG_DAOS_DIR 00:08:10.044 #define SPDK_CONFIG_DEBUG 1 00:08:10.044 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:10.044 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:08:10.044 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:10.044 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:10.044 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:10.044 #undef SPDK_CONFIG_DPDK_UADK 00:08:10.044 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:10.044 #define SPDK_CONFIG_EXAMPLES 1 00:08:10.044 #undef SPDK_CONFIG_FC 00:08:10.044 #define SPDK_CONFIG_FC_PATH 00:08:10.044 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:10.044 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:10.044 #define SPDK_CONFIG_FSDEV 1 00:08:10.044 #undef SPDK_CONFIG_FUSE 00:08:10.044 #undef SPDK_CONFIG_FUZZER 00:08:10.044 #define SPDK_CONFIG_FUZZER_LIB 00:08:10.044 #undef SPDK_CONFIG_GOLANG 00:08:10.044 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:10.044 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:10.044 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:10.044 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:10.044 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:10.044 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:10.044 #undef SPDK_CONFIG_HAVE_LZ4 00:08:10.044 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:08:10.044 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:08:10.044 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:10.044 #define SPDK_CONFIG_IDXD 1 00:08:10.044 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:10.044 #undef SPDK_CONFIG_IPSEC_MB 00:08:10.044 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:10.044 #define SPDK_CONFIG_ISAL 1 00:08:10.044 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:08:10.044 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:10.044 #define SPDK_CONFIG_LIBDIR 00:08:10.044 #undef SPDK_CONFIG_LTO 00:08:10.044 #define SPDK_CONFIG_MAX_LCORES 128 00:08:10.044 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:08:10.044 #define SPDK_CONFIG_NVME_CUSE 1 00:08:10.044 #undef SPDK_CONFIG_OCF 00:08:10.044 #define SPDK_CONFIG_OCF_PATH 00:08:10.044 #define SPDK_CONFIG_OPENSSL_PATH 00:08:10.044 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:10.044 #define SPDK_CONFIG_PGO_DIR 00:08:10.044 #undef SPDK_CONFIG_PGO_USE 00:08:10.044 #define SPDK_CONFIG_PREFIX /usr/local 00:08:10.044 #undef SPDK_CONFIG_RAID5F 00:08:10.044 #undef SPDK_CONFIG_RBD 00:08:10.044 #define SPDK_CONFIG_RDMA 1 00:08:10.044 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:10.044 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:10.044 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:10.044 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:10.044 #undef SPDK_CONFIG_SHARED 00:08:10.044 #undef SPDK_CONFIG_SMA 00:08:10.044 #define SPDK_CONFIG_TESTS 1 00:08:10.044 #undef SPDK_CONFIG_TSAN 00:08:10.044 #define SPDK_CONFIG_UBLK 1 00:08:10.044 #define SPDK_CONFIG_UBSAN 1 00:08:10.044 #define SPDK_CONFIG_UNIT_TESTS 1 00:08:10.044 #undef SPDK_CONFIG_URING 00:08:10.044 #define SPDK_CONFIG_URING_PATH 00:08:10.044 #undef SPDK_CONFIG_URING_ZNS 00:08:10.044 #undef SPDK_CONFIG_USDT 00:08:10.044 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:10.044 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:10.044 #undef SPDK_CONFIG_VFIO_USER 00:08:10.044 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:10.044 #define SPDK_CONFIG_VHOST 1 00:08:10.044 #define SPDK_CONFIG_VIRTIO 1 00:08:10.044 #undef SPDK_CONFIG_VTUNE 00:08:10.044 #define SPDK_CONFIG_VTUNE_DIR 00:08:10.044 #define SPDK_CONFIG_WERROR 1 00:08:10.044 #define SPDK_CONFIG_WPDK_DIR 00:08:10.044 #undef SPDK_CONFIG_XNVME 00:08:10.044 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:10.044 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:10.044 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:10.044 +++ shopt -s extglob 00:08:10.044 +++ [[ -e /bin/wpdk_common.sh ]] 00:08:10.044 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.044 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.044 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:10.044 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:10.044 ++++ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:10.045 ++++ 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:10.045 ++++ export PATH 00:08:10.045 ++++ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:10.045 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:10.045 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:10.045 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:10.045 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:10.045 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:08:10.045 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:08:10.045 +++ TEST_TAG=N/A 00:08:10.045 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:08:10.045 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:08:10.045 ++++ uname -s 00:08:10.045 +++ PM_OS=Linux 00:08:10.045 +++ MONITOR_RESOURCES_SUDO=() 00:08:10.045 +++ declare -A MONITOR_RESOURCES_SUDO 00:08:10.045 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:10.045 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:10.045 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:10.045 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:10.045 +++ SUDO[0]= 00:08:10.045 +++ SUDO[1]='sudo -E' 00:08:10.045 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:10.045 +++ [[ Linux == FreeBSD ]] 00:08:10.045 +++ [[ Linux == Linux ]] 00:08:10.045 +++ [[ QEMU != QEMU ]] 00:08:10.045 +++ [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:08:10.045 ++ : 0 00:08:10.045 ++ export RUN_NIGHTLY 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_RUN_VALGRIND 00:08:10.045 ++ : 1 00:08:10.045 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:08:10.045 ++ : 1 00:08:10.045 ++ export SPDK_TEST_UNITTEST 00:08:10.045 ++ : 00:08:10.045 ++ export SPDK_TEST_AUTOBUILD 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_RELEASE_BUILD 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_ISAL 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_ISCSI 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_ISCSI_INITIATOR 00:08:10.045 ++ : 1 00:08:10.045 ++ export SPDK_TEST_NVME 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_NVME_PMR 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_NVME_BP 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_NVME_CLI 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_NVME_CUSE 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_NVME_FDP 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_NVMF 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_VFIOUSER 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_VFIOUSER_QEMU 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_FUZZER 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_FUZZER_SHORT 00:08:10.045 ++ : rdma 00:08:10.045 ++ export SPDK_TEST_NVMF_TRANSPORT 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_RBD 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_VHOST 00:08:10.045 ++ : 1 00:08:10.045 ++ export SPDK_TEST_BLOCKDEV 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_RAID 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_IOAT 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_BLOBFS 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_VHOST_INIT 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_LVOL 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_VBDEV_COMPRESS 00:08:10.045 ++ : 1 00:08:10.045 ++ export SPDK_RUN_ASAN 00:08:10.045 ++ : 1 00:08:10.045 ++ export SPDK_RUN_UBSAN 00:08:10.045 ++ : 00:08:10.045 ++ export SPDK_RUN_EXTERNAL_DPDK 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_RUN_NON_ROOT 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_CRYPTO 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_FTL 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_OCF 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_VMD 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_OPAL 00:08:10.045 ++ : 00:08:10.045 ++ export SPDK_TEST_NATIVE_DPDK 00:08:10.045 ++ : true 00:08:10.045 ++ export SPDK_AUTOTEST_X 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_URING 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_USDT 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_USE_IGB_UIO 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_SCHEDULER 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_SCANBUILD 00:08:10.045 ++ : 00:08:10.045 ++ export SPDK_TEST_NVMF_NICS 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_SMA 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_DAOS 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_XNVME 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_ACCEL 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_ACCEL_DSA 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_ACCEL_IAA 00:08:10.045 ++ : 00:08:10.045 ++ export SPDK_TEST_FUZZER_TARGET 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_NVMF_MDNS 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_JSONRPC_GO_CLIENT 
00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_SETUP 00:08:10.045 ++ : 0 00:08:10.045 ++ export SPDK_TEST_NVME_INTERRUPT 00:08:10.045 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:10.045 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:10.045 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:08:10.045 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:08:10.045 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:10.045 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:10.045 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:10.045 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:10.045 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:10.045 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:08:10.045 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:10.045 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:10.045 ++ export PYTHONDONTWRITEBYTECODE=1 00:08:10.045 ++ PYTHONDONTWRITEBYTECODE=1 00:08:10.045 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:10.046 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:10.046 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:10.046 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:10.046 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:08:10.046 ++ rm -rf /var/tmp/asan_suppression_file 00:08:10.046 ++ cat 00:08:10.046 ++ echo leak:libfuse3.so 00:08:10.046 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:10.046 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:10.046 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:10.046 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:10.046 ++ '[' -z /var/spdk/dependencies ']' 00:08:10.046 ++ export DEPENDENCY_DIR 00:08:10.046 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:10.046 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:10.046 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:10.046 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:10.046 ++ export QEMU_BIN= 00:08:10.046 ++ QEMU_BIN= 00:08:10.046 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:08:10.046 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:08:10.046 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:10.046 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:10.046 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:10.046 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:10.046 ++ 
_LCOV_MAIN=0 00:08:10.046 ++ _LCOV_LLVM=1 00:08:10.046 ++ _LCOV= 00:08:10.046 ++ [[ '' == *clang* ]] 00:08:10.046 ++ [[ 0 -eq 1 ]] 00:08:10.046 ++ _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:10.046 ++ _lcov_opt[_LCOV_MAIN]= 00:08:10.046 ++ lcov_opt= 00:08:10.046 ++ '[' 0 -eq 0 ']' 00:08:10.046 ++ export valgrind= 00:08:10.046 ++ valgrind= 00:08:10.046 +++ uname -s 00:08:10.046 ++ '[' Linux = Linux ']' 00:08:10.046 ++ HUGEMEM=4096 00:08:10.046 ++ export CLEAR_HUGE=yes 00:08:10.046 ++ CLEAR_HUGE=yes 00:08:10.046 ++ MAKE=make 00:08:10.046 +++ nproc 00:08:10.046 ++ MAKEFLAGS=-j10 00:08:10.046 ++ export HUGEMEM=4096 00:08:10.046 ++ HUGEMEM=4096 00:08:10.046 ++ NO_HUGE=() 00:08:10.046 ++ TEST_MODE= 00:08:10.046 ++ [[ -z '' ]] 00:08:10.046 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:08:10.046 ++ exec 00:08:10.046 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:08:10.046 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:08:10.046 ++ set_test_storage 2147483648 00:08:10.046 ++ [[ -v testdir ]] 00:08:10.046 ++ local requested_size=2147483648 00:08:10.046 ++ local mount target_dir 00:08:10.046 ++ local -A mounts fss sizes avails uses 00:08:10.046 ++ local source fs size avail mount use 00:08:10.046 ++ local storage_fallback storage_candidates 00:08:10.046 +++ mktemp -udt spdk.XXXXXX 00:08:10.046 ++ storage_fallback=/tmp/spdk.kY6KTN 00:08:10.046 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:10.046 ++ [[ -n '' ]] 00:08:10.046 ++ [[ -n '' ]] 00:08:10.046 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.kY6KTN/tests/unit /tmp/spdk.kY6KTN 00:08:10.046 ++ requested_size=2214592512 00:08:10.046 ++ read -r source fs size use avail _ mount 00:08:10.046 +++ df -T 00:08:10.046 +++ grep -v Filesystem 00:08:10.046 ++ mounts["$mount"]=tmpfs 00:08:10.046 ++ fss["$mount"]=tmpfs 00:08:10.046 ++ avails["$mount"]=1252954112 00:08:10.046 ++ sizes["$mount"]=1254023168 00:08:10.046 ++ uses["$mount"]=1069056 00:08:10.046 ++ read -r source fs size use avail _ mount 00:08:10.046 ++ mounts["$mount"]=/dev/vda1 00:08:10.046 ++ fss["$mount"]=ext4 00:08:10.046 ++ avails["$mount"]=9698074624 00:08:10.046 ++ sizes["$mount"]=19681529856 00:08:10.046 ++ uses["$mount"]=9966678016 00:08:10.046 ++ read -r source fs size use avail _ mount 00:08:10.046 ++ mounts["$mount"]=tmpfs 00:08:10.046 ++ fss["$mount"]=tmpfs 00:08:10.046 ++ avails["$mount"]=6270111744 00:08:10.046 ++ sizes["$mount"]=6270111744 00:08:10.046 ++ uses["$mount"]=0 00:08:10.046 ++ read -r source fs size use avail _ mount 00:08:10.046 ++ mounts["$mount"]=tmpfs 00:08:10.046 ++ fss["$mount"]=tmpfs 00:08:10.046 ++ avails["$mount"]=5242880 00:08:10.046 ++ sizes["$mount"]=5242880 00:08:10.046 ++ uses["$mount"]=0 00:08:10.046 ++ read -r source fs size use avail _ mount 00:08:10.046 ++ mounts["$mount"]=/dev/vda16 00:08:10.046 ++ fss["$mount"]=ext4 00:08:10.046 ++ avails["$mount"]=777306112 00:08:10.046 ++ sizes["$mount"]=923156480 00:08:10.046 ++ uses["$mount"]=81207296 00:08:10.046 ++ read -r source fs size use avail _ mount 00:08:10.046 ++ mounts["$mount"]=/dev/vda15 00:08:10.046 ++ fss["$mount"]=vfat 00:08:10.046 ++ avails["$mount"]=103000064 00:08:10.046 ++ sizes["$mount"]=109395968 00:08:10.046 ++ uses["$mount"]=6395904 00:08:10.046 ++ read -r source fs size use avail _ mount 00:08:10.046 ++ 
mounts["$mount"]=tmpfs 00:08:10.046 ++ fss["$mount"]=tmpfs 00:08:10.046 ++ avails["$mount"]=1254006784 00:08:10.046 ++ sizes["$mount"]=1254019072 00:08:10.046 ++ uses["$mount"]=12288 00:08:10.046 ++ read -r source fs size use avail _ mount 00:08:10.046 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:08:10.046 ++ fss["$mount"]=fuse.sshfs 00:08:10.046 ++ avails["$mount"]=93433081856 00:08:10.046 ++ sizes["$mount"]=105088212992 00:08:10.046 ++ uses["$mount"]=6269698048 00:08:10.046 ++ read -r source fs size use avail _ mount 00:08:10.046 ++ printf '* Looking for test storage...\n' 00:08:10.046 * Looking for test storage... 00:08:10.046 ++ local target_space new_size 00:08:10.046 ++ for target_dir in "${storage_candidates[@]}" 00:08:10.046 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:08:10.046 +++ awk '$1 !~ /Filesystem/{print $6}' 00:08:10.046 ++ mount=/ 00:08:10.046 ++ target_space=9698074624 00:08:10.046 ++ (( target_space == 0 || target_space < requested_size )) 00:08:10.046 ++ (( target_space >= requested_size )) 00:08:10.046 ++ [[ ext4 == tmpfs ]] 00:08:10.046 ++ [[ ext4 == ramfs ]] 00:08:10.046 ++ [[ / == / ]] 00:08:10.046 ++ new_size=12181270528 00:08:10.046 ++ (( new_size * 100 / sizes[/] > 95 )) 00:08:10.046 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:08:10.046 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:08:10.046 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:08:10.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:08:10.046 ++ return 0 00:08:10.046 ++ set -o errtrace 00:08:10.046 ++ shopt -s extdebug 00:08:10.046 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:08:10.046 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:10.046 07:18:12 unittest -- common/autotest_common.sh@1685 -- # true 00:08:10.046 07:18:12 unittest -- common/autotest_common.sh@1687 -- # xtrace_fd 00:08:10.046 07:18:12 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:08:10.046 07:18:12 unittest -- common/autotest_common.sh@29 -- # exec 00:08:10.046 07:18:12 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:10.046 07:18:12 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:10.047 07:18:12 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:10.047 07:18:12 unittest -- common/autotest_common.sh@18 -- # set -x 00:08:10.047 07:18:12 unittest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:10.047 07:18:12 unittest -- common/autotest_common.sh@1693 -- # lcov --version 00:08:10.047 07:18:12 unittest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:10.047 07:18:13 unittest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:10.047 07:18:13 unittest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.047 07:18:13 unittest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.047 07:18:13 unittest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.047 07:18:13 unittest -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.047 07:18:13 unittest -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.047 07:18:13 unittest -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.047 07:18:13 unittest -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.047 07:18:13 unittest -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.047 07:18:13 unittest -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.047 07:18:13 unittest -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.047 07:18:13 unittest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.047 07:18:13 unittest -- scripts/common.sh@344 -- # case "$op" in 00:08:10.047 07:18:13 unittest -- scripts/common.sh@345 -- # : 1 00:08:10.047 07:18:13 unittest -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.047 07:18:13 unittest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:10.047 07:18:13 unittest -- scripts/common.sh@365 -- # decimal 1 00:08:10.047 07:18:13 unittest -- scripts/common.sh@353 -- # local d=1 00:08:10.047 07:18:13 unittest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.047 07:18:13 unittest -- scripts/common.sh@355 -- # echo 1 00:08:10.047 07:18:13 unittest -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.047 07:18:13 unittest -- scripts/common.sh@366 -- # decimal 2 00:08:10.047 07:18:13 unittest -- scripts/common.sh@353 -- # local d=2 00:08:10.047 07:18:13 unittest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.047 07:18:13 unittest -- scripts/common.sh@355 -- # echo 2 00:08:10.047 07:18:13 unittest -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.047 07:18:13 unittest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.047 07:18:13 unittest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.047 07:18:13 unittest -- scripts/common.sh@368 -- # return 0 00:08:10.047 07:18:13 unittest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.047 07:18:13 unittest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:10.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.047 --rc genhtml_branch_coverage=1 00:08:10.047 --rc genhtml_function_coverage=1 00:08:10.047 --rc genhtml_legend=1 00:08:10.047 --rc geninfo_all_blocks=1 00:08:10.047 --rc geninfo_unexecuted_blocks=1 00:08:10.047 00:08:10.047 ' 00:08:10.047 07:18:13 unittest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:10.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.047 --rc genhtml_branch_coverage=1 00:08:10.047 --rc genhtml_function_coverage=1 00:08:10.047 --rc genhtml_legend=1 00:08:10.047 --rc geninfo_all_blocks=1 00:08:10.047 --rc geninfo_unexecuted_blocks=1 00:08:10.047 00:08:10.047 ' 
00:08:10.047 07:18:13 unittest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:10.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.047 --rc genhtml_branch_coverage=1 00:08:10.047 --rc genhtml_function_coverage=1 00:08:10.047 --rc genhtml_legend=1 00:08:10.047 --rc geninfo_all_blocks=1 00:08:10.047 --rc geninfo_unexecuted_blocks=1 00:08:10.047 00:08:10.047 ' 00:08:10.047 07:18:13 unittest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:10.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.047 --rc genhtml_branch_coverage=1 00:08:10.047 --rc genhtml_function_coverage=1 00:08:10.047 --rc genhtml_legend=1 00:08:10.047 --rc geninfo_all_blocks=1 00:08:10.047 --rc geninfo_unexecuted_blocks=1 00:08:10.047 00:08:10.047 ' 00:08:10.047 07:18:13 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:08:10.047 07:18:13 unittest -- unit/unittest.sh@159 -- # '[' 0 -eq 1 ']' 00:08:10.047 07:18:13 unittest -- unit/unittest.sh@166 -- # '[' -z x ']' 00:08:10.047 07:18:13 unittest -- unit/unittest.sh@173 -- # '[' 0 -eq 1 ']' 00:08:10.047 07:18:13 unittest -- unit/unittest.sh@182 -- # [[ y == y ]] 00:08:10.047 07:18:13 unittest -- unit/unittest.sh@183 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:10.047 07:18:13 unittest -- unit/unittest.sh@184 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:10.047 07:18:13 unittest -- unit/unittest.sh@186 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:08:16.619 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:16.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:09:24.375 07:19:18 unittest -- unit/unittest.sh@190 -- # uname -m 00:09:24.375 07:19:18 unittest -- unit/unittest.sh@190 -- # '[' x86_64 = aarch64 ']' 00:09:24.375 07:19:18 unittest -- unit/unittest.sh@194 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:09:24.375 07:19:18 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.375 07:19:18 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.375 07:19:18 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:24.375 ************************************ 00:09:24.375 START TEST unittest_pci_event 00:09:24.375 ************************************ 00:09:24.375 07:19:18 unittest.unittest_pci_event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:09:24.375 00:09:24.375 00:09:24.375 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.375 http://cunit.sourceforge.net/ 00:09:24.375 00:09:24.375 00:09:24.375 Suite: pci_event 00:09:24.375 Test: test_pci_parse_event ...[2024-11-20 07:19:18.615314] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:09:24.375 [2024-11-20 07:19:18.616314] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:09:24.375 passed 00:09:24.375 00:09:24.375 Run Summary: Type Total Ran Passed Failed Inactive 
00:09:24.375 suites 1 1 n/a 0 0 00:09:24.375 tests 1 1 1 0 0 00:09:24.375 asserts 15 15 15 0 n/a 00:09:24.375 00:09:24.375 Elapsed time = 0.002 seconds 00:09:24.375 00:09:24.375 real 0m0.073s 00:09:24.375 user 0m0.027s 00:09:24.375 sys 0m0.035s 00:09:24.375 07:19:18 unittest.unittest_pci_event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.375 07:19:18 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:09:24.375 ************************************ 00:09:24.375 END TEST unittest_pci_event 00:09:24.375 ************************************ 00:09:24.375 07:19:18 unittest -- unit/unittest.sh@195 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:09:24.376 07:19:18 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.376 07:19:18 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.376 07:19:18 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:24.376 ************************************ 00:09:24.376 START TEST unittest_include 00:09:24.376 ************************************ 00:09:24.376 07:19:18 unittest.unittest_include -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:09:24.376 00:09:24.376 00:09:24.376 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.376 http://cunit.sourceforge.net/ 00:09:24.376 00:09:24.376 00:09:24.376 Suite: histogram 00:09:24.376 Test: histogram_test ...passed 00:09:24.376 Test: histogram_merge ...passed 00:09:24.376 00:09:24.376 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.376 suites 1 1 n/a 0 0 00:09:24.376 tests 2 2 2 0 0 00:09:24.376 asserts 50 50 50 0 n/a 00:09:24.376 00:09:24.376 Elapsed time = 0.008 seconds 00:09:24.376 00:09:24.376 real 0m0.048s 00:09:24.376 user 0m0.026s 00:09:24.376 sys 0m0.022s 00:09:24.376 07:19:18 unittest.unittest_include -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.376 07:19:18 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:09:24.376 ************************************ 00:09:24.376 END TEST unittest_include 00:09:24.376 ************************************ 00:09:24.376 07:19:18 unittest -- unit/unittest.sh@196 -- # run_test unittest_bdev unittest_bdev 00:09:24.376 07:19:18 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.376 07:19:18 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.376 07:19:18 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:24.376 ************************************ 00:09:24.376 START TEST unittest_bdev 00:09:24.376 ************************************ 00:09:24.376 07:19:18 unittest.unittest_bdev -- common/autotest_common.sh@1129 -- # unittest_bdev 00:09:24.376 07:19:18 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:09:24.376 00:09:24.376 00:09:24.376 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.376 http://cunit.sourceforge.net/ 00:09:24.376 00:09:24.376 00:09:24.376 Suite: bdev 00:09:24.376 Test: bytes_to_blocks_test ...passed 00:09:24.376 Test: num_blocks_test ...passed 00:09:24.376 Test: io_valid_test ...passed 00:09:24.376 Test: open_write_test ...[2024-11-20 07:19:18.850425] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8199:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:09:24.376 [2024-11-20 07:19:18.850673] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8199:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:09:24.376 [2024-11-20 07:19:18.850781] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8199:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:09:24.376 passed 00:09:24.376 Test: claim_test ...passed 00:09:24.376 Test: alias_add_del_test ...[2024-11-20 07:19:18.903039] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4700:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:09:24.376 [2024-11-20 07:19:18.903127] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4730:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:09:24.376 [2024-11-20 07:19:18.903184] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4700:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:09:24.376 passed 00:09:24.376 Test: get_device_stat_test ...passed 00:09:24.376 Test: bdev_io_types_test ...passed 00:09:24.376 Test: bdev_io_wait_test ...passed 00:09:24.376 Test: bdev_io_spans_split_test ...passed 00:09:24.376 Test: bdev_io_boundary_split_test ...passed 00:09:24.376 Test: bdev_io_max_size_and_segment_split_test ...[2024-11-20 07:19:19.006958] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3285:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:09:24.376 passed 00:09:24.376 Test: bdev_io_mix_split_test ...passed 00:09:24.376 Test: bdev_io_split_with_io_wait ...passed 00:09:24.376 Test: bdev_io_write_unit_split_test ...[2024-11-20 07:19:19.084786] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2828:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:09:24.376 [2024-11-20 07:19:19.084876] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2828:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:09:24.376 [2024-11-20 07:19:19.084906] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2828:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:09:24.376 [2024-11-20 07:19:19.084953] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2828:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:09:24.376 passed 00:09:24.376 Test: bdev_io_alignment_with_boundary ...passed 00:09:24.376 Test: bdev_io_alignment ...passed 00:09:24.376 Test: bdev_histograms ...passed 00:09:24.376 Test: bdev_write_zeroes ...passed 00:09:24.376 Test: bdev_compare_and_write ...passed 00:09:24.376 Test: bdev_compare ...passed 00:09:24.376 Test: bdev_compare_emulated ...passed 00:09:24.376 Test: bdev_zcopy_write ...passed 00:09:24.376 Test: bdev_zcopy_read ...passed 00:09:24.376 Test: bdev_open_while_hotremove ...passed 00:09:24.376 Test: bdev_close_while_hotremove ...passed 00:09:24.376 Test: bdev_open_ext_test ...passed 00:09:24.376 Test: bdev_open_ext_unregister ...[2024-11-20 07:19:19.391973] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8305:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:09:24.376 [2024-11-20 07:19:19.392149] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8305:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:09:24.376 passed 00:09:24.376 Test: bdev_set_io_timeout ...passed 00:09:24.376 Test: bdev_set_qd_sampling ...passed 00:09:24.376 Test: lba_range_overlap ...passed 00:09:24.376 Test: lock_lba_range_check_ranges ...passed 00:09:24.376 Test: lock_lba_range_with_io_outstanding ...passed 00:09:24.376 Test: lock_lba_range_overlapped ...passed 00:09:24.376 Test: bdev_quiesce ...[2024-11-20 
07:19:19.526214] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10284:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:09:24.376 passed 00:09:24.376 Test: bdev_io_abort ...passed 00:09:24.376 Test: bdev_unmap ...passed 00:09:24.376 Test: bdev_write_zeroes_split_test ...passed 00:09:24.376 Test: bdev_set_options_test ...passed 00:09:24.376 Test: bdev_get_memory_domains ...passed 00:09:24.376 Test: bdev_io_ext ...[2024-11-20 07:19:19.621329] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 503:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:09:24.376 passed 00:09:24.376 Test: bdev_io_ext_no_opts ...passed 00:09:24.376 Test: bdev_io_ext_invalid_opts ...passed 00:09:24.376 Test: bdev_io_ext_split ...passed 00:09:24.376 Test: bdev_io_ext_bounce_buffer ...passed 00:09:24.376 Test: bdev_register_uuid_alias ...[2024-11-20 07:19:19.755489] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4700:bdev_name_add: *ERROR*: Bdev name 1e6d34a6-816d-47d5-b119-ff67a81bd85c already exists 00:09:24.376 [2024-11-20 07:19:19.755582] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:1e6d34a6-816d-47d5-b119-ff67a81bd85c alias for bdev bdev0 00:09:24.376 passed 00:09:24.376 Test: bdev_unregister_by_name ...[2024-11-20 07:19:19.775634] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8095:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:09:24.376 passed 00:09:24.376 Test: for_each_bdev_test ...[2024-11-20 07:19:19.775710] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8103:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:09:24.376 passed 00:09:24.376 Test: bdev_seek_test ...passed 00:09:24.376 Test: bdev_copy ...passed 00:09:24.376 Test: bdev_copy_split_test ...passed 00:09:24.376 Test: examine_locks ...passed 00:09:24.376 Test: claim_v2_rwo ...[2024-11-20 07:19:19.855302] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8199:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:09:24.376 [2024-11-20 07:19:19.855377] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8839:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:09:24.376 [2024-11-20 07:19:19.855397] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:09:24.376 [2024-11-20 07:19:19.855411] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:09:24.376 [2024-11-20 07:19:19.855446] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8676:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:09:24.376 [2024-11-20 07:19:19.855489] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8834:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:09:24.376 passed 00:09:24.376 Test: claim_v2_rom ...[2024-11-20 07:19:19.855629] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8199:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:09:24.376 [2024-11-20 07:19:19.855650] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:09:24.376 [2024-11-20 07:19:19.855663] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:09:24.376 [2024-11-20 07:19:19.855673] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8676:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:09:24.376 [2024-11-20 07:19:19.855716] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8877:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:09:24.376 passed 00:09:24.376 Test: claim_v2_rwm ...[2024-11-20 07:19:19.855736] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8872:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:09:24.376 [2024-11-20 07:19:19.855834] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8907:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:09:24.376 [2024-11-20 07:19:19.855864] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8199:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:09:24.376 [2024-11-20 07:19:19.855882] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:09:24.376 passed 00:09:24.376 Test: claim_v2_existing_writer ...[2024-11-20 07:19:19.855893] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:09:24.376 [2024-11-20 07:19:19.855923] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8676:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:09:24.377 [2024-11-20 07:19:19.855937] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8927:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:09:24.377 [2024-11-20 07:19:19.855983] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8907:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:09:24.377 [2024-11-20 07:19:19.856102] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8872:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:09:24.377 [2024-11-20 07:19:19.856120] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8872:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:09:24.377 passed 00:09:24.377 Test: claim_v2_existing_v1 ...[2024-11-20 07:19:19.856217] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:09:24.377 [2024-11-20 07:19:19.856232] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:09:24.377 [2024-11-20 07:19:19.856244] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:09:24.377 passed 00:09:24.377 Test: claim_v1_existing_v2 ...[2024-11-20 07:19:19.856328] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8676:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:09:24.377 [2024-11-20 07:19:19.856350] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8676:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:09:24.377 [2024-11-20 07:19:19.856371] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8676:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:09:24.377 passed 00:09:24.377 Test: examine_claimed ...[2024-11-20 07:19:19.862223] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:09:24.377 passed 00:09:24.377 Test: examine_claimed_manual ...[2024-11-20 07:19:19.888082] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:09:24.377 passed 00:09:24.377 Test: get_numa_id ...passed 00:09:24.377 Test: get_device_stat_with_reset ...passed 00:09:24.377 00:09:24.377 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.377 suites 1 1 n/a 0 0 00:09:24.377 tests 62 62 62 0 0 00:09:24.377 asserts 4705 4705 4705 0 n/a 00:09:24.377 00:09:24.377 Elapsed time = 1.118 seconds 00:09:24.377 07:19:19 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:09:24.377 00:09:24.377 00:09:24.377 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.377 http://cunit.sourceforge.net/ 00:09:24.377 00:09:24.377 00:09:24.377 Suite: nvme 00:09:24.377 Test: test_create_ctrlr ...passed 00:09:24.377 Test: test_reset_ctrlr ...[2024-11-20 07:19:19.983936] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed. 00:09:24.377 passed 00:09:24.377 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:09:24.377 Test: test_failover_ctrlr ...passed 00:09:24.377 Test: test_race_between_failover_and_add_secondary_trid ...[2024-11-20 07:19:19.986022] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed. 00:09:24.377 [2024-11-20 07:19:19.986188] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed. 00:09:24.377 [2024-11-20 07:19:19.986337] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed. 00:09:24.377 passed 00:09:24.377 Test: test_pending_reset ...[2024-11-20 07:19:19.987932] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:09:24.377 [2024-11-20 07:19:19.988166] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
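The claim_v2_* tests in the bdev suite above print the compatibility rules they exercise: a key is rejected for read-write-once and read-only-many claims, a shared_claim_key is required for read-write-may claims, and a read-only-many claim cannot be obtained through a writable descriptor. The standalone C sketch below only restates those rules as the messages spell them out; the type and function names are placeholders, not SPDK's claim_verify_* code.

#include <stdbool.h>
#include <stdio.h>

/* Claim types as named in the bdev_ut output above (placeholder enum). */
enum claim_type {
    CLAIM_READ_MANY_WRITE_ONE,   /* "read-write-once" in the messages */
    CLAIM_READ_MANY_WRITE_NONE,  /* "read-only-many"                  */
    CLAIM_READ_MANY_WRITE_MANY,  /* "read-write-may" / shared writers */
};

/* Restates only the rules visible in the error strings above;
 * inferred from the log, not the real claim-verification logic. */
static bool
claim_request_ok(enum claim_type type, bool has_key, bool desc_writable)
{
    switch (type) {
    case CLAIM_READ_MANY_WRITE_ONE:
        return !has_key;                    /* key option not supported  */
    case CLAIM_READ_MANY_WRITE_NONE:
        return !has_key && !desc_writable;  /* no key, no writable desc  */
    case CLAIM_READ_MANY_WRITE_MANY:
        return has_key;                     /* shared_claim_key required */
    }
    return false;
}

int main(void)
{
    printf("%d\n", claim_request_ok(CLAIM_READ_MANY_WRITE_MANY, false, true)); /* 0 */
    printf("%d\n", claim_request_ok(CLAIM_READ_MANY_WRITE_ONE, false, true));  /* 1 */
    return 0;
}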
00:09:24.377 passed 00:09:24.377 Test: test_attach_ctrlr ...[2024-11-20 07:19:19.988972] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:09:24.377 passed 00:09:24.377 Test: test_aer_cb ...passed 00:09:24.377 Test: test_submit_nvme_cmd ...passed 00:09:24.377 Test: test_add_remove_trid ...passed 00:09:24.377 Test: test_abort ...[2024-11-20 07:19:19.991404] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7953:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:09:24.377 passed 00:09:24.377 Test: test_get_io_qpair ...passed 00:09:24.377 Test: test_bdev_unregister ...passed 00:09:24.377 Test: test_compare_ns ...passed 00:09:24.377 Test: test_init_ana_log_page ...passed 00:09:24.377 Test: test_get_memory_domains ...passed 00:09:24.377 Test: test_reconnect_qpair ...[2024-11-20 07:19:19.993341] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 17] Resetting controller failed. 00:09:24.377 passed 00:09:24.377 Test: test_create_bdev_ctrlr ...[2024-11-20 07:19:19.993750] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5755:bdev_nvme_check_multipath: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 18] cntlid 18 are duplicated. 00:09:24.377 passed 00:09:24.377 Test: test_add_multi_ns_to_bdev ...[2024-11-20 07:19:19.994605] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4912:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:09:24.377 passed 00:09:24.377 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:09:24.377 Test: test_admin_path ...passed 00:09:24.377 Test: test_reset_bdev_ctrlr ...[2024-11-20 07:19:19.997722] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed. 00:09:24.377 [2024-11-20 07:19:19.997883] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed. 00:09:24.377 [2024-11-20 07:19:19.997965] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed. 00:09:24.377 [2024-11-20 07:19:19.998245] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed. 00:09:24.377 [2024-11-20 07:19:19.998468] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed. 00:09:24.377 [2024-11-20 07:19:19.998557] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed. 00:09:24.377 [2024-11-20 07:19:19.998814] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed. 00:09:24.377 [2024-11-20 07:19:19.998893] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed. 
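Every "Suite: ... / Test: ... / Run Summary" block in this log (the bdev suite above, the nvme suite running here, and the suites that follow) is the console output of a CUnit test binary. For orientation only, a minimal CUnit program that produces output of this shape looks roughly like the following; the suite and test names are placeholders, and the binary links against -lcunit.

#include <CUnit/Basic.h>

/* A trivial test; CU_ASSERT_* results feed the per-suite Run Summary. */
static void example_test(void)
{
    CU_ASSERT_EQUAL(1 + 1, 2);
}

int main(void)
{
    CU_pSuite suite;

    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }
    suite = CU_add_suite("example", NULL, NULL);
    if (suite == NULL || CU_add_test(suite, "example_test", example_test) == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }
    CU_basic_set_mode(CU_BRM_VERBOSE);   /* verbose mode prints each Test: line */
    CU_basic_run_tests();                /* prints the Run Summary table        */
    CU_cleanup_registry();
    return CU_get_error();
}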
00:09:24.377 [2024-11-20 07:19:19.999056] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed. 00:09:24.377 [2024-11-20 07:19:19.999086] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed. 00:09:24.377 [2024-11-20 07:19:19.999186] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed. 00:09:24.377 [2024-11-20 07:19:19.999211] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed. 00:09:24.377 passed 00:09:24.377 Test: test_find_io_path ...passed 00:09:24.377 Test: test_retry_io_if_ana_state_is_updating ...passed 00:09:24.377 Test: test_retry_io_for_io_path_error ...passed 00:09:24.377 Test: test_retry_io_count ...passed 00:09:24.377 Test: test_concurrent_read_ana_log_page ...passed 00:09:24.377 Test: test_retry_io_for_ana_error ...passed 00:09:24.377 Test: test_check_io_error_resiliency_params ...[2024-11-20 07:19:20.001518] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6595:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:09:24.377 [2024-11-20 07:19:20.001550] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6599:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:09:24.377 [2024-11-20 07:19:20.001565] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6608:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:09:24.377 [2024-11-20 07:19:20.001575] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6611:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:09:24.377 [2024-11-20 07:19:20.001587] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6623:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:09:24.377 [2024-11-20 07:19:20.001607] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6623:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:09:24.377 [2024-11-20 07:19:20.001620] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6603:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:09:24.377 passed 00:09:24.377 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-11-20 07:19:20.001628] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6618:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:09:24.377 [2024-11-20 07:19:20.001650] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6615:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 
00:09:24.377 passed 00:09:24.377 Test: test_reconnect_ctrlr ...[2024-11-20 07:19:20.002163] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed. 00:09:24.377 [2024-11-20 07:19:20.002229] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed. 00:09:24.377 [2024-11-20 07:19:20.002397] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed. 00:09:24.377 [2024-11-20 07:19:20.002456] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed. 00:09:24.377 [2024-11-20 07:19:20.002497] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed. 00:09:24.377 passed 00:09:24.377 Test: test_retry_failover_ctrlr ...[2024-11-20 07:19:20.002738] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed. 00:09:24.377 passed 00:09:24.378 Test: test_fail_path ...[2024-11-20 07:19:20.003127] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed. 00:09:24.378 [2024-11-20 07:19:20.003230] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed. 00:09:24.378 [2024-11-20 07:19:20.003295] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed. 00:09:24.378 [2024-11-20 07:19:20.003353] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed. 00:09:24.378 [2024-11-20 07:19:20.003424] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed. 00:09:24.378 passed 00:09:24.378 Test: test_nvme_ns_cmp ...passed 00:09:24.378 Test: test_ana_transition ...passed 00:09:24.378 Test: test_set_preferred_path ...passed 00:09:24.378 Test: test_find_next_io_path ...passed 00:09:24.378 Test: test_find_io_path_min_qd ...passed 00:09:24.378 Test: test_disable_auto_failback ...[2024-11-20 07:19:20.004605] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 45] Resetting controller failed. 00:09:24.378 passed 00:09:24.378 Test: test_set_multipath_policy ...passed 00:09:24.378 Test: test_uuid_generation ...passed 00:09:24.378 Test: test_retry_io_to_same_path ...passed 00:09:24.378 Test: test_race_between_reset_and_disconnected ...passed 00:09:24.378 Test: test_ctrlr_op_rpc ...passed 00:09:24.378 Test: test_bdev_ctrlr_op_rpc ...passed 00:09:24.378 Test: test_disable_enable_ctrlr ...[2024-11-20 07:19:20.007173] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed. 
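The test_check_io_error_resiliency_params messages above enumerate the constraints on ctrlr_loss_timeout_sec, reconnect_delay_sec and fast_io_fail_timeout_sec: the loss timeout may not be below -1, the reconnect delay must be non-zero whenever the loss timeout is non-zero, both the reconnect delay and the fast-I/O-fail timeout must be zero when the loss timeout is zero, and neither the reconnect delay nor the fast-I/O-fail timeout may exceed the loss timeout (nor the reconnect delay the fast-I/O-fail timeout). The sketch below restates exactly those checks; treating fast_io_fail_timeout_sec == 0 and ctrlr_loss_timeout_sec == -1 as "no upper bound" is an assumption, and this is not the bdev_nvme implementation.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Placeholder restatement of the constraints printed by the test above. */
static bool
io_error_resiliency_params_ok(int32_t ctrlr_loss_timeout_sec,
                              uint32_t reconnect_delay_sec,
                              uint32_t fast_io_fail_timeout_sec)
{
    if (ctrlr_loss_timeout_sec < -1) {
        return false;                         /* "can't be less than -1" */
    }
    if (ctrlr_loss_timeout_sec == 0) {
        /* "Both ... must be 0 if ctrlr_loss_timeout_sec is 0." */
        return reconnect_delay_sec == 0 && fast_io_fail_timeout_sec == 0;
    }
    if (reconnect_delay_sec == 0) {
        return false;          /* "can't be 0 if ctrlr_loss_timeout_sec is not 0" */
    }
    if (fast_io_fail_timeout_sec != 0 &&
        reconnect_delay_sec > fast_io_fail_timeout_sec) {
        return false;          /* "can't be more than fast_io_fail_timeout_sec" */
    }
    if (ctrlr_loss_timeout_sec > 0) {         /* -1 is treated as unbounded */
        if ((int64_t)reconnect_delay_sec > ctrlr_loss_timeout_sec ||
            (int64_t)fast_io_fail_timeout_sec > ctrlr_loss_timeout_sec) {
            return false;      /* neither may exceed ctrlr_loss_timeout_sec */
        }
    }
    return true;
}

int main(void)
{
    printf("%d\n", io_error_resiliency_params_ok(-2, 1, 0)); /* 0: loss timeout < -1  */
    printf("%d\n", io_error_resiliency_params_ok(-1, 1, 0)); /* 1                     */
    printf("%d\n", io_error_resiliency_params_ok(10, 5, 3)); /* 0: delay > fast-fail  */
    return 0;
}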
00:09:24.378 [2024-11-20 07:19:20.007252] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed. 00:09:24.378 passed 00:09:24.378 Test: test_delete_ctrlr_done ...passed 00:09:24.378 Test: test_ns_remove_during_reset ...passed 00:09:24.378 Test: test_io_path_is_current ...passed 00:09:24.378 Test: test_bdev_reset_abort_io ...passed 00:09:24.378 Test: test_race_between_clear_pending_resets_and_reset_ctrlr_complete ...passed 00:09:24.378 00:09:24.378 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.378 suites 1 1 n/a 0 0 00:09:24.378 tests 51 51 51 0 0 00:09:24.378 asserts 4017 4017 4017 0 n/a 00:09:24.378 00:09:24.378 Elapsed time = 0.026 seconds 00:09:24.378 07:19:20 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:09:24.378 00:09:24.378 00:09:24.378 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.378 http://cunit.sourceforge.net/ 00:09:24.378 00:09:24.378 Test Options 00:09:24.378 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:09:24.378 00:09:24.378 Suite: raid 00:09:24.378 Test: test_create_raid ...passed 00:09:24.378 Test: test_create_raid_superblock ...passed 00:09:24.378 Test: test_delete_raid ...passed 00:09:24.378 Test: test_create_raid_invalid_args ...[2024-11-20 07:19:20.051791] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1521:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:09:24.378 [2024-11-20 07:19:20.052294] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1515:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:09:24.378 [2024-11-20 07:19:20.052989] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1505:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:09:24.378 [2024-11-20 07:19:20.053197] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3321:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:09:24.378 [2024-11-20 07:19:20.053238] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3501:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:09:24.378 [2024-11-20 07:19:20.054119] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3321:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:09:24.378 [2024-11-20 07:19:20.054186] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3501:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:09:24.378 passed 00:09:24.378 Test: test_delete_raid_invalid_args ...passed 00:09:24.378 Test: test_io_channel ...passed 00:09:24.378 Test: test_reset_io ...passed 00:09:24.378 Test: test_multi_raid ...passed 00:09:24.378 Test: test_io_type_supported ...passed 00:09:24.378 Test: test_raid_json_dump_info ...passed 00:09:24.378 Test: test_context_size ...passed 00:09:24.378 Test: test_raid_level_conversions ...passed 00:09:24.378 Test: test_raid_io_split ...passed 00:09:24.378 Test: test_raid_process ...passed 00:09:24.378 Test: test_raid_process_with_qos ...passed 00:09:24.378 00:09:24.378 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.378 suites 1 1 n/a 0 0 00:09:24.378 tests 15 15 15 0 0 00:09:24.378 asserts 6602 6602 6602 0 n/a 00:09:24.378 00:09:24.378 Elapsed time = 0.028 seconds 00:09:24.378 07:19:20 unittest.unittest_bdev -- 
unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:09:24.378 00:09:24.378 00:09:24.378 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.378 http://cunit.sourceforge.net/ 00:09:24.378 00:09:24.378 00:09:24.378 Suite: raid_sb 00:09:24.378 Test: test_raid_bdev_write_superblock ...passed 00:09:24.378 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:09:24.378 Test: test_raid_bdev_parse_superblock ...[2024-11-20 07:19:20.136228] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:09:24.378 passed 00:09:24.378 Suite: raid_sb_md 00:09:24.378 Test: test_raid_bdev_write_superblock ...passed 00:09:24.378 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:09:24.378 Test: test_raid_bdev_parse_superblock ...[2024-11-20 07:19:20.136764] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:09:24.378 passed 00:09:24.378 Suite: raid_sb_md_interleaved 00:09:24.378 Test: test_raid_bdev_write_superblock ...passed 00:09:24.378 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:09:24.378 Test: test_raid_bdev_parse_superblock ...[2024-11-20 07:19:20.137330] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:09:24.378 passed 00:09:24.378 00:09:24.378 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.378 suites 3 3 n/a 0 0 00:09:24.378 tests 9 9 9 0 0 00:09:24.378 asserts 139 139 139 0 n/a 00:09:24.378 00:09:24.378 Elapsed time = 0.002 seconds 00:09:24.378 07:19:20 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:09:24.378 00:09:24.378 00:09:24.378 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.378 http://cunit.sourceforge.net/ 00:09:24.378 00:09:24.378 00:09:24.378 Suite: concat 00:09:24.378 Test: test_concat_start ...passed 00:09:24.378 Test: test_concat_rw ...passed 00:09:24.378 Test: test_concat_null_payload ...passed 00:09:24.378 00:09:24.378 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.378 suites 1 1 n/a 0 0 00:09:24.378 tests 3 3 3 0 0 00:09:24.378 asserts 8460 8460 8460 0 n/a 00:09:24.378 00:09:24.378 Elapsed time = 0.009 seconds 00:09:24.378 07:19:20 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:09:24.378 00:09:24.378 00:09:24.378 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.378 http://cunit.sourceforge.net/ 00:09:24.378 00:09:24.378 00:09:24.378 Suite: raid0 00:09:24.378 Test: test_write_io ...passed 00:09:24.378 Test: test_read_io ...passed 00:09:24.378 Test: test_unmap_io ...passed 00:09:24.378 Test: test_io_failure ...passed 00:09:24.378 Suite: raid0_dif 00:09:24.378 Test: test_write_io ...passed 00:09:24.378 Test: test_read_io ...passed 00:09:24.378 Test: test_unmap_io ...passed 00:09:24.378 Test: test_io_failure ...passed 00:09:24.378 00:09:24.378 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.378 suites 2 2 n/a 0 0 00:09:24.378 tests 8 8 8 0 0 00:09:24.378 asserts 368291 368291 368291 0 n/a 00:09:24.378 00:09:24.378 Elapsed time = 0.151 seconds 00:09:24.378 07:19:20 unittest.unittest_bdev -- unit/unittest.sh@26 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:09:24.378 00:09:24.378 00:09:24.378 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.378 http://cunit.sourceforge.net/ 00:09:24.378 00:09:24.378 00:09:24.378 Suite: raid1 00:09:24.378 Test: test_raid1_start ...passed 00:09:24.378 Test: test_raid1_read_balancing ...passed 00:09:24.378 Test: test_raid1_write_error ...passed 00:09:24.378 Test: test_raid1_read_error ...passed 00:09:24.378 00:09:24.378 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.378 suites 1 1 n/a 0 0 00:09:24.378 tests 4 4 4 0 0 00:09:24.378 asserts 4374 4374 4374 0 n/a 00:09:24.378 00:09:24.378 Elapsed time = 0.007 seconds 00:09:24.378 07:19:20 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:09:24.378 00:09:24.378 00:09:24.378 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.378 http://cunit.sourceforge.net/ 00:09:24.378 00:09:24.378 00:09:24.378 Suite: zone 00:09:24.378 Test: test_zone_get_operation ...passed 00:09:24.378 Test: test_bdev_zone_get_info ...passed 00:09:24.378 Test: test_bdev_zone_management ...passed 00:09:24.378 Test: test_bdev_zone_append ...passed 00:09:24.378 Test: test_bdev_zone_append_with_md ...passed 00:09:24.378 Test: test_bdev_zone_appendv ...passed 00:09:24.378 Test: test_bdev_zone_appendv_with_md ...passed 00:09:24.379 Test: test_bdev_io_get_append_location ...passed 00:09:24.379 00:09:24.379 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.379 suites 1 1 n/a 0 0 00:09:24.379 tests 8 8 8 0 0 00:09:24.379 asserts 94 94 94 0 n/a 00:09:24.379 00:09:24.379 Elapsed time = 0.000 seconds 00:09:24.379 07:19:20 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:09:24.379 00:09:24.379 00:09:24.379 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.379 http://cunit.sourceforge.net/ 00:09:24.379 00:09:24.379 00:09:24.379 Suite: gpt_parse 00:09:24.379 Test: test_parse_mbr_and_primary ...[2024-11-20 07:19:20.527254] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:09:24.379 [2024-11-20 07:19:20.527557] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:09:24.379 passed 00:09:24.379 Test: test_parse_secondary ...passed 00:09:24.379 Test: test_check_mbr ...passed 00:09:24.379 Test: test_read_header ...passed 00:09:24.379 Test: test_read_partitions ...[2024-11-20 07:19:20.527669] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:09:24.379 [2024-11-20 07:19:20.527764] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:09:24.379 [2024-11-20 07:19:20.527826] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:09:24.379 [2024-11-20 07:19:20.527870] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:09:24.379 [2024-11-20 07:19:20.528627] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:09:24.379 [2024-11-20 07:19:20.528656] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 
00:09:24.379 [2024-11-20 07:19:20.528774] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:09:24.379 [2024-11-20 07:19:20.528802] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:09:24.379 [2024-11-20 07:19:20.529516] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:09:24.379 [2024-11-20 07:19:20.529559] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:09:24.379 [2024-11-20 07:19:20.529738] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:09:24.379 [2024-11-20 07:19:20.529777] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:09:24.379 [2024-11-20 07:19:20.529828] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:09:24.379 [2024-11-20 07:19:20.529881] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:09:24.379 [2024-11-20 07:19:20.529923] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:09:24.379 [2024-11-20 07:19:20.529944] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:09:24.379 [2024-11-20 07:19:20.530055] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:09:24.379 [2024-11-20 07:19:20.530082] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:09:24.379 [2024-11-20 07:19:20.530118] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:09:24.379 [2024-11-20 07:19:20.530145] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:09:24.379 [2024-11-20 07:19:20.530502] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:09:24.379 passed 00:09:24.379 00:09:24.379 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.379 suites 1 1 n/a 0 0 00:09:24.379 tests 5 5 5 0 0 00:09:24.379 asserts 33 33 33 0 n/a 00:09:24.379 00:09:24.379 Elapsed time = 0.004 seconds 00:09:24.379 07:19:20 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:09:24.379 00:09:24.379 00:09:24.379 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.379 http://cunit.sourceforge.net/ 00:09:24.379 00:09:24.379 00:09:24.379 Suite: bdev_part 00:09:24.379 Test: part_test ...passed 00:09:24.379 Test: part_free_test ...[2024-11-20 07:19:20.572434] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4700:bdev_name_add: *ERROR*: Bdev name c43e5438-cb4c-5dff-ba76-29cfaf105dfc already exists 00:09:24.379 [2024-11-20 07:19:20.572720] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:c43e5438-cb4c-5dff-ba76-29cfaf105dfc alias for bdev test1 00:09:24.379 passed 00:09:24.379 Test: 
part_get_io_channel_test ...passed 00:09:24.379 Test: part_construct_ext ...passed 00:09:24.379 00:09:24.379 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.379 suites 1 1 n/a 0 0 00:09:24.379 tests 4 4 4 0 0 00:09:24.379 asserts 48 48 48 0 n/a 00:09:24.379 00:09:24.379 Elapsed time = 0.040 seconds 00:09:24.379 07:19:20 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:09:24.379 00:09:24.379 00:09:24.379 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.379 http://cunit.sourceforge.net/ 00:09:24.379 00:09:24.379 00:09:24.379 Suite: scsi_nvme_suite 00:09:24.379 Test: scsi_nvme_translate_test ...passed 00:09:24.379 00:09:24.379 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.379 suites 1 1 n/a 0 0 00:09:24.379 tests 1 1 1 0 0 00:09:24.379 asserts 104 104 104 0 n/a 00:09:24.379 00:09:24.379 Elapsed time = 0.000 seconds 00:09:24.379 07:19:20 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:09:24.379 00:09:24.379 00:09:24.379 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.379 http://cunit.sourceforge.net/ 00:09:24.379 00:09:24.379 00:09:24.379 Suite: lvol 00:09:24.379 Test: ut_lvs_init ...[2024-11-20 07:19:20.702668] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:09:24.379 [2024-11-20 07:19:20.703014] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:09:24.379 passed 00:09:24.379 Test: ut_lvol_init ...passed 00:09:24.379 Test: ut_lvol_snapshot ...passed 00:09:24.379 Test: ut_lvol_clone ...passed 00:09:24.379 Test: ut_lvs_destroy ...passed 00:09:24.379 Test: ut_lvs_unload ...passed 00:09:24.379 Test: ut_lvol_resize ...[2024-11-20 07:19:20.704734] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:09:24.379 passed 00:09:24.379 Test: ut_lvol_set_read_only ...passed 00:09:24.379 Test: ut_lvol_hotremove ...passed 00:09:24.379 Test: ut_vbdev_lvol_get_io_channel ...passed 00:09:24.379 Test: ut_vbdev_lvol_io_type_supported ...passed 00:09:24.379 Test: ut_lvol_read_write ...passed 00:09:24.379 Test: ut_vbdev_lvol_submit_request ...passed 00:09:24.379 Test: ut_lvol_examine_config ...passed 00:09:24.379 Test: ut_lvol_examine_disk ...[2024-11-20 07:19:20.705329] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:09:24.379 passed 00:09:24.379 Test: ut_lvol_rename ...[2024-11-20 07:19:20.706419] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:09:24.379 [2024-11-20 07:19:20.706486] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:09:24.379 passed 00:09:24.379 Test: ut_bdev_finish ...passed 00:09:24.379 Test: ut_lvs_rename ...passed 00:09:24.379 Test: ut_lvol_seek ...passed 00:09:24.379 Test: ut_esnap_dev_create ...[2024-11-20 07:19:20.707205] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:09:24.379 [2024-11-20 07:19:20.707253] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : 
Invalid esnap ID length (36) 00:09:24.379 [2024-11-20 07:19:20.707282] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:09:24.379 passed 00:09:24.379 Test: ut_lvol_esnap_clone_bad_args ...[2024-11-20 07:19:20.707398] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:09:24.379 [2024-11-20 07:19:20.707432] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:09:24.379 passed 00:09:24.379 Test: ut_lvol_shallow_copy ...[2024-11-20 07:19:20.707745] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:09:24.379 [2024-11-20 07:19:20.707782] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:09:24.379 passed 00:09:24.379 Test: ut_lvol_set_external_parent ...passed 00:09:24.379 00:09:24.379 [2024-11-20 07:19:20.707876] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:09:24.379 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.379 suites 1 1 n/a 0 0 00:09:24.379 tests 23 23 23 0 0 00:09:24.379 asserts 770 770 770 0 n/a 00:09:24.379 00:09:24.379 Elapsed time = 0.005 seconds 00:09:24.379 07:19:20 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:09:24.379 00:09:24.379 00:09:24.379 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.379 http://cunit.sourceforge.net/ 00:09:24.379 00:09:24.379 00:09:24.379 Suite: zone_block 00:09:24.379 Test: test_zone_block_create ...passed 00:09:24.380 Test: test_zone_block_create_invalid ...[2024-11-20 07:19:20.785336] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:09:24.380 [2024-11-20 07:19:20.785555] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-11-20 07:19:20.785721] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:09:24.380 [2024-11-20 07:19:20.785774] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-11-20 07:19:20.785936] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 861:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:09:24.380 [2024-11-20 07:19:20.785971] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-11-20 07:19:20.786053] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 866:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:09:24.380 [2024-11-20 07:19:20.786086] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 
00:09:24.380 Test: test_get_zone_info ...[2024-11-20 07:19:20.786688] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 [2024-11-20 07:19:20.786752] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 [2024-11-20 07:19:20.786788] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 passed 00:09:24.380 Test: test_supported_io_types ...passed 00:09:24.380 Test: test_reset_zone ...[2024-11-20 07:19:20.787550] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 [2024-11-20 07:19:20.787601] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 passed 00:09:24.380 Test: test_open_zone ...[2024-11-20 07:19:20.788092] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 [2024-11-20 07:19:20.788760] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 [2024-11-20 07:19:20.788805] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 passed 00:09:24.380 Test: test_zone_write ...[2024-11-20 07:19:20.789327] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:09:24.380 [2024-11-20 07:19:20.789364] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 [2024-11-20 07:19:20.789408] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:09:24.380 [2024-11-20 07:19:20.789426] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 [2024-11-20 07:19:20.795707] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:09:24.380 [2024-11-20 07:19:20.795776] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 [2024-11-20 07:19:20.795832] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:09:24.380 [2024-11-20 07:19:20.795855] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:09:24.380 [2024-11-20 07:19:20.801512] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:09:24.380 [2024-11-20 07:19:20.801575] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 passed 00:09:24.380 Test: test_zone_read ...[2024-11-20 07:19:20.802030] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:09:24.380 [2024-11-20 07:19:20.802060] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 [2024-11-20 07:19:20.802106] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:09:24.380 [2024-11-20 07:19:20.802126] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 [2024-11-20 07:19:20.802520] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:09:24.380 [2024-11-20 07:19:20.802561] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 passed 00:09:24.380 Test: test_close_zone ...[2024-11-20 07:19:20.802867] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 [2024-11-20 07:19:20.802921] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 [2024-11-20 07:19:20.803083] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 [2024-11-20 07:19:20.803134] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 passed 00:09:24.380 Test: test_finish_zone ...[2024-11-20 07:19:20.803651] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 [2024-11-20 07:19:20.803715] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 passed 00:09:24.380 Test: test_append_zone ...[2024-11-20 07:19:20.804081] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:09:24.380 [2024-11-20 07:19:20.804110] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 [2024-11-20 07:19:20.804146] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:09:24.380 [2024-11-20 07:19:20.804163] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
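The zone_block write failures above ("Trying to write to zone with invalid address (lba 0x407, wp 0x405)", "Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)", "Trying to write to invalid zone (lba 0x5000)") imply the usual zoned-write checks: the target LBA must fall in a valid zone, the write must start at the zone's current write pointer, and it must not run past the zone capacity. The sketch below is an inference from those messages with made-up field names; it is not the vbdev_zone_block code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative zone state; field names are placeholders. */
struct zone {
    uint64_t start_lba;       /* first LBA of the zone         */
    uint64_t capacity;        /* writable blocks in the zone   */
    uint64_t write_pointer;   /* next LBA that may be written  */
};

static bool
zone_write_ok(const struct zone *z, uint64_t lba, uint64_t len)
{
    if (lba < z->start_lba || lba >= z->start_lba + z->capacity) {
        return false;   /* invalid zone / invalid address */
    }
    if (lba != z->write_pointer) {
        return false;   /* must start at the write pointer, cf. "(lba 0x407, wp 0x405)" */
    }
    if (lba + len > z->start_lba + z->capacity) {
        return false;   /* "Write exceeds zone capacity" */
    }
    return true;
}

int main(void)
{
    struct zone z = { .start_lba = 0x0, .capacity = 0x400, .write_pointer = 0x3f0 };

    printf("%d\n", zone_write_ok(&z, 0x3f0, 0x20)); /* 0: 0x3f0 + 0x20 > 0x400 */
    printf("%d\n", zone_write_ok(&z, 0x3f0, 0x10)); /* 1 */
    return 0;
}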
00:09:24.380 passed 00:09:24.380 00:09:24.380 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.380 suites 1 1 n/a 0 0 00:09:24.380 tests 11 11 11 0 0 00:09:24.380 asserts 3437 3437 3437 0 n/a 00:09:24.380 00:09:24.380 Elapsed time = 0.030 seconds 00:09:24.380 [2024-11-20 07:19:20.814593] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:09:24.380 [2024-11-20 07:19:20.814665] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:24.380 07:19:20 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:09:24.380 00:09:24.380 00:09:24.380 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.380 http://cunit.sourceforge.net/ 00:09:24.380 00:09:24.380 00:09:24.380 Suite: bdev 00:09:24.380 Test: basic ...[2024-11-20 07:19:20.915787] thread.c:2389:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x648dafb58a41): Operation not permitted (rc=-1) 00:09:24.380 [2024-11-20 07:19:20.916066] thread.c:2389:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x5130000003c0 (0x648dafb58a00): Operation not permitted (rc=-1) 00:09:24.380 [2024-11-20 07:19:20.916110] thread.c:2389:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x648dafb58a41): Operation not permitted (rc=-1) 00:09:24.380 passed 00:09:24.380 Test: unregister_and_close ...passed 00:09:24.380 Test: unregister_and_close_different_threads ...passed 00:09:24.380 Test: basic_qos ...passed 00:09:24.380 Test: put_channel_during_reset ...passed 00:09:24.380 Test: aborted_reset ...passed 00:09:24.380 Test: aborted_reset_no_outstanding_io ...passed 00:09:24.380 Test: io_during_reset ...passed 00:09:24.380 Test: reset_completions ...passed 00:09:24.380 Test: io_during_qos_queue ...passed 00:09:24.380 Test: io_during_qos_reset ...passed 00:09:24.380 Test: enomem ...passed 00:09:24.380 Test: enomem_multi_bdev ...passed 00:09:24.380 Test: enomem_multi_bdev_unregister ...passed 00:09:24.380 Test: enomem_multi_io_target ...passed 00:09:24.380 Test: qos_dynamic_enable ...passed 00:09:24.380 Test: bdev_histograms_mt ...passed 00:09:24.380 Test: bdev_set_io_timeout_mt ...[2024-11-20 07:19:21.467323] thread.c: 484:spdk_thread_lib_fini: *ERROR*: io_device 0x5130000003c0 not unregistered 00:09:24.380 passed 00:09:24.381 Test: lock_lba_range_then_submit_io ...[2024-11-20 07:19:21.474727] thread.c:2193:spdk_io_device_register: *ERROR*: io_device 0x648dafb589c0 already registered (old:0x5130000003c0 new:0x513000000c80) 00:09:24.381 passed 00:09:24.381 Test: unregister_during_reset ...passed 00:09:24.381 Test: event_notify_and_close ...passed 00:09:24.381 Test: unregister_and_qos_poller ...passed 00:09:24.381 Test: reset_start_complete_race ...passed 00:09:24.381 Suite: bdev_wrong_thread 00:09:24.381 Test: spdk_bdev_register_wt ...[2024-11-20 07:19:21.620980] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8633:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x519000002880 (0x519000002880) 00:09:24.381 passed 00:09:24.381 Test: spdk_bdev_examine_wt ...[2024-11-20 07:19:21.621268] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 832:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x519000002880 (0x519000002880) 00:09:24.381 passed 00:09:24.381 
00:09:24.381 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.381 suites 2 2 n/a 0 0 00:09:24.381 tests 25 25 25 0 0 00:09:24.381 asserts 637 637 637 0 n/a 00:09:24.381 00:09:24.381 Elapsed time = 0.717 seconds 00:09:24.381 00:09:24.381 real 0m2.858s 00:09:24.381 user 0m1.299s 00:09:24.381 sys 0m1.564s 00:09:24.381 07:19:21 unittest.unittest_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.381 07:19:21 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:24.381 ************************************ 00:09:24.381 END TEST unittest_bdev 00:09:24.381 ************************************ 00:09:24.381 07:19:21 unittest -- unit/unittest.sh@197 -- # [[ n == y ]] 00:09:24.381 07:19:21 unittest -- unit/unittest.sh@202 -- # [[ n == y ]] 00:09:24.381 07:19:21 unittest -- unit/unittest.sh@207 -- # [[ n == y ]] 00:09:24.381 07:19:21 unittest -- unit/unittest.sh@211 -- # [[ n == y ]] 00:09:24.381 07:19:21 unittest -- unit/unittest.sh@215 -- # run_test unittest_blob_blobfs unittest_blob 00:09:24.381 07:19:21 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.381 07:19:21 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.381 07:19:21 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:24.381 ************************************ 00:09:24.381 START TEST unittest_blob_blobfs 00:09:24.381 ************************************ 00:09:24.381 07:19:21 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1129 -- # unittest_blob 00:09:24.381 07:19:21 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:09:24.381 07:19:21 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:09:24.381 00:09:24.381 00:09:24.381 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.381 http://cunit.sourceforge.net/ 00:09:24.381 00:09:24.381 00:09:24.381 Suite: blob_nocopy_noextent 00:09:24.381 Test: blob_init ...[2024-11-20 07:19:21.749733] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5500:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:09:24.381 passed 00:09:24.381 Test: blob_thin_provision ...passed 00:09:24.381 Test: blob_read_only ...passed 00:09:24.381 Test: bs_load ...[2024-11-20 07:19:21.835105] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 974:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:09:24.381 passed 00:09:24.381 Test: bs_load_custom_cluster_size ...passed 00:09:24.381 Test: bs_load_after_failed_grow ...passed 00:09:24.381 Test: bs_cluster_sz ...[2024-11-20 07:19:21.862648] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:09:24.381 [2024-11-20 07:19:21.863071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5631:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:09:24.381 [2024-11-20 07:19:21.863162] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3893:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:09:24.381 passed 00:09:24.381 Test: bs_resize_md ...passed 00:09:24.381 Test: bs_destroy ...passed 00:09:24.381 Test: bs_type ...passed 00:09:24.381 Test: bs_super_block ...passed 00:09:24.381 Test: bs_test_recover_cluster_count ...passed 00:09:24.381 Test: bs_grow_live ...passed 00:09:24.381 Test: bs_grow_live_no_space ...passed 00:09:24.381 Test: bs_test_grow ...passed 00:09:24.381 Test: blob_serialize_test ...passed 00:09:24.381 Test: super_block_crc ...passed 00:09:24.381 Test: blob_thin_prov_write_count_io ...passed 00:09:24.381 Test: blob_thin_prov_unmap_cluster ...passed 00:09:24.381 Test: bs_load_iter_test ...passed 00:09:24.381 Test: blob_relations ...[2024-11-20 07:19:22.092428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:24.381 [2024-11-20 07:19:22.092538] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.381 [2024-11-20 07:19:22.093537] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:24.381 [2024-11-20 07:19:22.093586] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.381 passed 00:09:24.381 Test: blob_relations2 ...[2024-11-20 07:19:22.108478] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:24.381 [2024-11-20 07:19:22.108564] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.381 [2024-11-20 07:19:22.108608] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:24.381 [2024-11-20 07:19:22.108620] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.381 [2024-11-20 07:19:22.110170] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:24.381 [2024-11-20 07:19:22.110221] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.381 [2024-11-20 07:19:22.110716] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:24.381 [2024-11-20 07:19:22.110763] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.381 passed 00:09:24.381 Test: blob_relations3 ...passed 00:09:24.381 Test: blobstore_clean_power_failure ...passed 00:09:24.381 Test: blob_delete_snapshot_power_failure ...[2024-11-20 07:19:22.264465] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:24.381 [2024-11-20 07:19:22.276940] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:24.381 [2024-11-20 07:19:22.277034] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:24.381 [2024-11-20 07:19:22.277060] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.381 [2024-11-20 07:19:22.289357] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:24.381 [2024-11-20 07:19:22.289435] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:24.381 [2024-11-20 07:19:22.289474] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:24.381 [2024-11-20 07:19:22.289499] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.381 [2024-11-20 07:19:22.301951] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8238:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:09:24.381 [2024-11-20 07:19:22.302057] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.381 [2024-11-20 07:19:22.314602] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8107:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:09:24.381 [2024-11-20 07:19:22.314756] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.381 [2024-11-20 07:19:22.327296] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8051:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:09:24.381 [2024-11-20 07:19:22.327401] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.381 passed 00:09:24.381 Test: blob_create_snapshot_power_failure ...[2024-11-20 07:19:22.363925] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:24.381 [2024-11-20 07:19:22.387655] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:24.381 [2024-11-20 07:19:22.399962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6456:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:09:24.381 passed 00:09:24.381 Test: blob_io_unit ...passed 00:09:24.381 Test: blob_io_unit_compatibility ...passed 00:09:24.381 Test: blob_ext_md_pages ...passed 00:09:24.381 Test: blob_esnap_io_4096_4096 ...passed 00:09:24.381 Test: blob_esnap_io_512_512 ...passed 00:09:24.381 Test: blob_esnap_io_4096_512 ...passed 00:09:24.381 Test: blob_esnap_io_512_4096 ...passed 00:09:24.381 Test: blob_esnap_clone_resize ...passed 00:09:24.381 Suite: blob_bs_nocopy_noextent 00:09:24.381 Test: blob_open ...passed 00:09:24.381 Test: blob_create ...[2024-11-20 07:19:22.675660] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:09:24.381 passed 00:09:24.381 Test: blob_create_loop ...passed 00:09:24.381 Test: blob_create_fail ...[2024-11-20 07:19:22.770866] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:24.381 passed 00:09:24.381 Test: blob_create_internal ...passed 00:09:24.381 Test: blob_create_zero_extent ...passed 00:09:24.381 Test: blob_snapshot ...passed 00:09:24.381 Test: blob_clone ...passed 00:09:24.381 Test: blob_inflate 
...[2024-11-20 07:19:22.950236] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7119:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:09:24.381 passed 00:09:24.381 Test: blob_delete ...passed 00:09:24.381 Test: blob_resize_test ...[2024-11-20 07:19:23.014776] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7856:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:24.381 passed 00:09:24.381 Test: blob_resize_thin_test ...passed 00:09:24.381 Test: channel_ops ...passed 00:09:24.381 Test: blob_super ...passed 00:09:24.381 Test: blob_rw_verify_iov ...passed 00:09:24.381 Test: blob_unmap ...passed 00:09:24.381 Test: blob_iter ...passed 00:09:24.381 Test: blob_parse_md ...passed 00:09:24.381 Test: bs_load_pending_removal ...passed 00:09:24.382 Test: bs_unload ...[2024-11-20 07:19:23.310404] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5888:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:24.382 passed 00:09:24.382 Test: bs_usable_clusters ...passed 00:09:24.382 Test: blob_crc ...[2024-11-20 07:19:23.375277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:24.382 [2024-11-20 07:19:23.375401] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:24.382 passed 00:09:24.382 Test: blob_flags ...passed 00:09:24.382 Test: bs_version ...passed 00:09:24.382 Test: blob_set_xattrs_test ...[2024-11-20 07:19:23.474652] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:24.382 [2024-11-20 07:19:23.474762] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:24.382 passed 00:09:24.382 Test: blob_thin_prov_alloc ...passed 00:09:24.382 Test: blob_insert_cluster_msg_test ...passed 00:09:24.382 Test: blob_thin_prov_rw ...passed 00:09:24.382 Test: blob_thin_prov_rle ...passed 00:09:24.382 Test: blob_thin_prov_rw_iov ...passed 00:09:24.382 Test: blob_snapshot_rw ...passed 00:09:24.382 Test: blob_snapshot_rw_iov ...passed 00:09:24.382 Test: blob_inflate_rw ...passed 00:09:24.382 Test: blob_snapshot_freeze_io ...passed 00:09:24.382 Test: blob_operation_split_rw ...passed 00:09:24.382 Test: blob_operation_split_rw_iov ...passed 00:09:24.382 Test: blob_simultaneous_operations ...[2024-11-20 07:19:24.449514] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:24.382 [2024-11-20 07:19:24.449604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.382 [2024-11-20 07:19:24.450952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:24.382 [2024-11-20 07:19:24.451003] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.382 [2024-11-20 07:19:24.464184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:24.382 [2024-11-20 07:19:24.464262] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.382 [2024-11-20 07:19:24.464401] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:24.382 [2024-11-20 07:19:24.464417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.382 passed 00:09:24.382 Test: blob_persist_test ...passed 00:09:24.382 Test: blob_decouple_snapshot ...passed 00:09:24.382 Test: blob_seek_io_unit ...passed 00:09:24.382 Test: blob_nested_freezes ...passed 00:09:24.382 Test: blob_clone_resize ...passed 00:09:24.382 Test: blob_shallow_copy ...[2024-11-20 07:19:24.743319] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:09:24.382 [2024-11-20 07:19:24.743609] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7352:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:09:24.382 [2024-11-20 07:19:24.743766] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:09:24.382 passed 00:09:24.382 Suite: blob_blob_nocopy_noextent 00:09:24.382 Test: blob_write ...passed 00:09:24.382 Test: blob_read ...passed 00:09:24.382 Test: blob_rw_verify ...passed 00:09:24.382 Test: blob_rw_verify_iov_nomem ...passed 00:09:24.382 Test: blob_rw_iov_read_only ...passed 00:09:24.382 Test: blob_xattr ...passed 00:09:24.382 Test: blob_dirty_shutdown ...passed 00:09:24.382 Test: blob_is_degraded ...passed 00:09:24.382 Suite: blob_esnap_bs_nocopy_noextent 00:09:24.382 Test: blob_esnap_create ...passed 00:09:24.382 Test: blob_esnap_thread_add_remove ...passed 00:09:24.382 Test: blob_esnap_clone_snapshot ...passed 00:09:24.382 Test: blob_esnap_clone_inflate ...passed 00:09:24.382 Test: blob_esnap_clone_decouple ...passed 00:09:24.382 Test: blob_esnap_clone_reload ...passed 00:09:24.382 Test: blob_esnap_hotplug ...passed 00:09:24.382 Test: blob_set_parent ...[2024-11-20 07:19:25.289437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7623:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:09:24.382 [2024-11-20 07:19:25.289525] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7629:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:09:24.382 [2024-11-20 07:19:25.289654] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:09:24.382 [2024-11-20 07:19:25.289710] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7565:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:09:24.382 [2024-11-20 07:19:25.290297] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:24.382 passed 00:09:24.382 Test: blob_set_external_parent ...[2024-11-20 07:19:25.322999] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:09:24.382 [2024-11-20 07:19:25.323084] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7806:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:09:24.382 [2024-11-20 07:19:25.323107] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7759:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:09:24.382 [2024-11-20 07:19:25.323641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7765:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:24.382 passed 00:09:24.382 Suite: blob_nocopy_extent 00:09:24.382 Test: blob_init ...[2024-11-20 07:19:25.334970] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5500:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:09:24.382 passed 00:09:24.382 Test: blob_thin_provision ...passed 00:09:24.382 Test: blob_read_only ...passed 00:09:24.382 Test: bs_load ...[2024-11-20 07:19:25.380555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 974:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:09:24.382 passed 00:09:24.382 Test: bs_load_custom_cluster_size ...passed 00:09:24.382 Test: bs_load_after_failed_grow ...passed 00:09:24.382 Test: bs_cluster_sz ...[2024-11-20 07:19:25.406132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:09:24.382 [2024-11-20 07:19:25.406412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5631:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:09:24.382 [2024-11-20 07:19:25.406465] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3893:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:09:24.382 passed 00:09:24.382 Test: bs_resize_md ...passed 00:09:24.382 Test: bs_destroy ...passed 00:09:24.382 Test: bs_type ...passed 00:09:24.382 Test: bs_super_block ...passed 00:09:24.382 Test: bs_test_recover_cluster_count ...passed 00:09:24.382 Test: bs_grow_live ...passed 00:09:24.382 Test: bs_grow_live_no_space ...passed 00:09:24.382 Test: bs_test_grow ...passed 00:09:24.382 Test: blob_serialize_test ...passed 00:09:24.382 Test: super_block_crc ...passed 00:09:24.382 Test: blob_thin_prov_write_count_io ...passed 00:09:24.382 Test: blob_thin_prov_unmap_cluster ...passed 00:09:24.382 Test: bs_load_iter_test ...passed 00:09:24.382 Test: blob_relations ...[2024-11-20 07:19:25.622804] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:24.382 [2024-11-20 07:19:25.622890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.382 [2024-11-20 07:19:25.623949] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:24.382 [2024-11-20 07:19:25.624009] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.382 passed 00:09:24.382 Test: blob_relations2 ...[2024-11-20 07:19:25.639753] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:24.382 [2024-11-20 07:19:25.639834] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.382 [2024-11-20 07:19:25.639858] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:24.382 [2024-11-20 07:19:25.639869] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.382 [2024-11-20 
07:19:25.641455] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:24.382 [2024-11-20 07:19:25.641518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.382 [2024-11-20 07:19:25.642022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:24.382 [2024-11-20 07:19:25.642069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.382 passed 00:09:24.382 Test: blob_relations3 ...passed 00:09:24.382 Test: blobstore_clean_power_failure ...passed 00:09:24.382 Test: blob_delete_snapshot_power_failure ...[2024-11-20 07:19:25.798361] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:24.382 [2024-11-20 07:19:25.810931] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:24.382 [2024-11-20 07:19:25.823400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:24.382 [2024-11-20 07:19:25.823479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:24.382 [2024-11-20 07:19:25.823508] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.382 [2024-11-20 07:19:25.835922] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:24.382 [2024-11-20 07:19:25.836020] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:24.382 [2024-11-20 07:19:25.836040] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:24.382 [2024-11-20 07:19:25.836059] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.382 [2024-11-20 07:19:25.849156] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:24.382 [2024-11-20 07:19:25.849254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:24.382 [2024-11-20 07:19:25.849293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:24.382 [2024-11-20 07:19:25.849316] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.383 [2024-11-20 07:19:25.861769] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8238:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:09:24.383 [2024-11-20 07:19:25.861875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.383 [2024-11-20 07:19:25.874328] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8107:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:09:24.383 [2024-11-20 07:19:25.874454] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.383 [2024-11-20 07:19:25.887262] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8051:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:09:24.383 [2024-11-20 07:19:25.887356] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.383 passed 00:09:24.383 Test: blob_create_snapshot_power_failure ...[2024-11-20 07:19:25.924562] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:24.383 [2024-11-20 07:19:25.936548] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:24.383 [2024-11-20 07:19:25.960925] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:24.383 [2024-11-20 07:19:25.973385] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6456:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:09:24.383 passed 00:09:24.383 Test: blob_io_unit ...passed 00:09:24.383 Test: blob_io_unit_compatibility ...passed 00:09:24.383 Test: blob_ext_md_pages ...passed 00:09:24.383 Test: blob_esnap_io_4096_4096 ...passed 00:09:24.383 Test: blob_esnap_io_512_512 ...passed 00:09:24.383 Test: blob_esnap_io_4096_512 ...passed 00:09:24.383 Test: blob_esnap_io_512_4096 ...passed 00:09:24.383 Test: blob_esnap_clone_resize ...passed 00:09:24.383 Suite: blob_bs_nocopy_extent 00:09:24.383 Test: blob_open ...passed 00:09:24.383 Test: blob_create ...[2024-11-20 07:19:26.251539] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:09:24.383 passed 00:09:24.383 Test: blob_create_loop ...passed 00:09:24.383 Test: blob_create_fail ...[2024-11-20 07:19:26.358631] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:24.383 passed 00:09:24.383 Test: blob_create_internal ...passed 00:09:24.383 Test: blob_create_zero_extent ...passed 00:09:24.383 Test: blob_snapshot ...passed 00:09:24.383 Test: blob_clone ...passed 00:09:24.383 Test: blob_inflate ...[2024-11-20 07:19:26.542192] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7119:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:09:24.383 passed 00:09:24.383 Test: blob_delete ...passed 00:09:24.383 Test: blob_resize_test ...[2024-11-20 07:19:26.608874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7856:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:24.383 passed 00:09:24.383 Test: blob_resize_thin_test ...passed 00:09:24.383 Test: channel_ops ...passed 00:09:24.383 Test: blob_super ...passed 00:09:24.383 Test: blob_rw_verify_iov ...passed 00:09:24.383 Test: blob_unmap ...passed 00:09:24.383 Test: blob_iter ...passed 00:09:24.383 Test: blob_parse_md ...passed 00:09:24.383 Test: bs_load_pending_removal ...passed 00:09:24.383 Test: bs_unload ...[2024-11-20 07:19:26.907381] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5888:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:24.383 passed 00:09:24.383 Test: bs_usable_clusters ...passed 00:09:24.383 Test: blob_crc ...[2024-11-20 07:19:26.972706] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:24.383 [2024-11-20 07:19:26.972818] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:24.383 passed 00:09:24.383 Test: blob_flags ...passed 00:09:24.383 Test: bs_version ...passed 00:09:24.383 Test: blob_set_xattrs_test ...[2024-11-20 07:19:27.072081] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:24.383 [2024-11-20 07:19:27.072159] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:24.383 passed 00:09:24.383 Test: blob_thin_prov_alloc ...passed 00:09:24.383 Test: blob_insert_cluster_msg_test ...passed 00:09:24.383 Test: blob_thin_prov_rw ...passed 00:09:24.383 Test: blob_thin_prov_rle ...passed 00:09:24.383 Test: blob_thin_prov_rw_iov ...passed 00:09:24.383 Test: blob_snapshot_rw ...passed 00:09:24.383 Test: blob_snapshot_rw_iov ...passed 00:09:24.383 Test: blob_inflate_rw ...passed 00:09:24.383 Test: blob_snapshot_freeze_io ...passed 00:09:24.383 Test: blob_operation_split_rw ...passed 00:09:24.383 Test: blob_operation_split_rw_iov ...passed 00:09:24.383 Test: blob_simultaneous_operations ...[2024-11-20 07:19:28.018540] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:24.383 [2024-11-20 07:19:28.018632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.383 [2024-11-20 07:19:28.019710] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:24.383 [2024-11-20 07:19:28.019751] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.383 [2024-11-20 07:19:28.030866] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:24.383 [2024-11-20 07:19:28.030942] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.383 [2024-11-20 07:19:28.031053] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:24.383 [2024-11-20 07:19:28.031067] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.383 passed 00:09:24.383 Test: blob_persist_test ...passed 00:09:24.383 Test: blob_decouple_snapshot ...passed 00:09:24.383 Test: blob_seek_io_unit ...passed 00:09:24.383 Test: blob_nested_freezes ...passed 00:09:24.383 Test: blob_clone_resize ...passed 00:09:24.645 Test: blob_shallow_copy ...[2024-11-20 07:19:28.297730] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:09:24.645 [2024-11-20 07:19:28.297980] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7352:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:09:24.645 [2024-11-20 07:19:28.298132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:09:24.645 passed 00:09:24.645 Suite: blob_blob_nocopy_extent 00:09:24.645 Test: blob_write ...passed 00:09:24.645 Test: blob_read ...passed 00:09:24.645 Test: blob_rw_verify ...passed 00:09:24.645 Test: blob_rw_verify_iov_nomem ...passed 00:09:24.645 Test: blob_rw_iov_read_only ...passed 00:09:24.645 Test: blob_xattr ...passed 00:09:24.645 Test: blob_dirty_shutdown ...passed 00:09:24.904 Test: blob_is_degraded ...passed 00:09:24.904 Suite: blob_esnap_bs_nocopy_extent 00:09:24.904 Test: blob_esnap_create ...passed 00:09:24.904 Test: blob_esnap_thread_add_remove ...passed 00:09:24.904 Test: blob_esnap_clone_snapshot ...passed 00:09:24.904 Test: blob_esnap_clone_inflate ...passed 00:09:24.904 Test: blob_esnap_clone_decouple ...passed 00:09:24.904 Test: blob_esnap_clone_reload ...passed 00:09:24.904 Test: blob_esnap_hotplug ...passed 00:09:24.904 Test: blob_set_parent ...[2024-11-20 07:19:28.824970] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7623:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:09:24.904 [2024-11-20 07:19:28.825059] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7629:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:09:24.904 [2024-11-20 07:19:28.825161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:09:24.904 [2024-11-20 07:19:28.825186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7565:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:09:24.904 [2024-11-20 07:19:28.825679] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:25.164 passed 00:09:25.164 Test: blob_set_external_parent ...[2024-11-20 07:19:28.857984] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:09:25.164 [2024-11-20 07:19:28.858080] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7806:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:09:25.164 [2024-11-20 07:19:28.858097] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7759:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:09:25.164 [2024-11-20 07:19:28.858458] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7765:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:25.164 passed 00:09:25.164 Suite: blob_copy_noextent 00:09:25.164 Test: blob_init ...[2024-11-20 07:19:28.869535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5500:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:09:25.164 passed 00:09:25.164 Test: blob_thin_provision ...passed 00:09:25.164 Test: blob_read_only ...passed 00:09:25.164 Test: bs_load ...[2024-11-20 07:19:28.913402] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 974:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:09:25.164 passed 00:09:25.164 Test: bs_load_custom_cluster_size ...passed 00:09:25.164 Test: bs_load_after_failed_grow ...passed 00:09:25.164 Test: bs_cluster_sz ...[2024-11-20 07:19:28.936935] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:09:25.164 [2024-11-20 07:19:28.937133] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5631:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:09:25.164 [2024-11-20 07:19:28.937169] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3893:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:09:25.164 passed 00:09:25.164 Test: bs_resize_md ...passed 00:09:25.164 Test: bs_destroy ...passed 00:09:25.164 Test: bs_type ...passed 00:09:25.164 Test: bs_super_block ...passed 00:09:25.164 Test: bs_test_recover_cluster_count ...passed 00:09:25.164 Test: bs_grow_live ...passed 00:09:25.164 Test: bs_grow_live_no_space ...passed 00:09:25.164 Test: bs_test_grow ...passed 00:09:25.164 Test: blob_serialize_test ...passed 00:09:25.164 Test: super_block_crc ...passed 00:09:25.164 Test: blob_thin_prov_write_count_io ...passed 00:09:25.423 Test: blob_thin_prov_unmap_cluster ...passed 00:09:25.423 Test: bs_load_iter_test ...passed 00:09:25.423 Test: blob_relations ...[2024-11-20 07:19:29.129489] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:25.423 [2024-11-20 07:19:29.129565] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:25.423 [2024-11-20 07:19:29.130145] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:25.423 [2024-11-20 07:19:29.130176] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:25.423 passed 00:09:25.423 Test: blob_relations2 ...[2024-11-20 07:19:29.143549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:25.423 [2024-11-20 07:19:29.143613] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:25.423 [2024-11-20 07:19:29.143635] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:25.423 [2024-11-20 07:19:29.143647] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:25.423 [2024-11-20 07:19:29.144610] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: 
Cannot remove snapshot with more than one clone 00:09:25.423 [2024-11-20 07:19:29.144654] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:25.423 [2024-11-20 07:19:29.144962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:25.423 [2024-11-20 07:19:29.144996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:25.423 passed 00:09:25.423 Test: blob_relations3 ...passed 00:09:25.423 Test: blobstore_clean_power_failure ...passed 00:09:25.423 Test: blob_delete_snapshot_power_failure ...[2024-11-20 07:19:29.294307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:25.423 [2024-11-20 07:19:29.305949] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:25.423 [2024-11-20 07:19:29.306045] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:25.423 [2024-11-20 07:19:29.306066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:25.423 [2024-11-20 07:19:29.317630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:25.423 [2024-11-20 07:19:29.317717] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:25.423 [2024-11-20 07:19:29.317733] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:25.423 [2024-11-20 07:19:29.317767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:25.423 [2024-11-20 07:19:29.329336] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8238:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:09:25.423 [2024-11-20 07:19:29.329427] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:25.423 [2024-11-20 07:19:29.340828] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8107:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:09:25.423 [2024-11-20 07:19:29.340945] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:25.681 [2024-11-20 07:19:29.352428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8051:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:09:25.681 [2024-11-20 07:19:29.352523] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:25.681 passed 00:09:25.681 Test: blob_create_snapshot_power_failure ...[2024-11-20 07:19:29.386749] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:25.681 [2024-11-20 07:19:29.408911] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:25.681 [2024-11-20 07:19:29.420453] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6456:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:09:25.681 passed 
00:09:25.681 Test: blob_io_unit ...passed 00:09:25.681 Test: blob_io_unit_compatibility ...passed 00:09:25.681 Test: blob_ext_md_pages ...passed 00:09:25.681 Test: blob_esnap_io_4096_4096 ...passed 00:09:25.681 Test: blob_esnap_io_512_512 ...passed 00:09:25.681 Test: blob_esnap_io_4096_512 ...passed 00:09:25.681 Test: blob_esnap_io_512_4096 ...passed 00:09:25.939 Test: blob_esnap_clone_resize ...passed 00:09:25.939 Suite: blob_bs_copy_noextent 00:09:25.939 Test: blob_open ...passed 00:09:25.939 Test: blob_create ...[2024-11-20 07:19:29.684885] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:09:25.939 passed 00:09:25.939 Test: blob_create_loop ...passed 00:09:25.939 Test: blob_create_fail ...[2024-11-20 07:19:29.773753] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:25.939 passed 00:09:25.939 Test: blob_create_internal ...passed 00:09:25.939 Test: blob_create_zero_extent ...passed 00:09:26.199 Test: blob_snapshot ...passed 00:09:26.199 Test: blob_clone ...passed 00:09:26.199 Test: blob_inflate ...[2024-11-20 07:19:29.940709] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7119:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:09:26.199 passed 00:09:26.199 Test: blob_delete ...passed 00:09:26.199 Test: blob_resize_test ...[2024-11-20 07:19:30.003783] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7856:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:26.199 passed 00:09:26.199 Test: blob_resize_thin_test ...passed 00:09:26.199 Test: channel_ops ...passed 00:09:26.199 Test: blob_super ...passed 00:09:26.458 Test: blob_rw_verify_iov ...passed 00:09:26.458 Test: blob_unmap ...passed 00:09:26.458 Test: blob_iter ...passed 00:09:26.458 Test: blob_parse_md ...passed 00:09:26.458 Test: bs_load_pending_removal ...passed 00:09:26.458 Test: bs_unload ...[2024-11-20 07:19:30.301971] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5888:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:26.458 passed 00:09:26.458 Test: bs_usable_clusters ...passed 00:09:26.458 Test: blob_crc ...[2024-11-20 07:19:30.366774] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:26.458 [2024-11-20 07:19:30.366901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:26.458 passed 00:09:26.718 Test: blob_flags ...passed 00:09:26.718 Test: bs_version ...passed 00:09:26.718 Test: blob_set_xattrs_test ...[2024-11-20 07:19:30.465811] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:26.718 [2024-11-20 07:19:30.465906] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:26.718 passed 00:09:26.718 Test: blob_thin_prov_alloc ...passed 00:09:26.982 Test: blob_insert_cluster_msg_test ...passed 00:09:26.982 Test: blob_thin_prov_rw ...passed 00:09:26.982 Test: blob_thin_prov_rle ...passed 00:09:26.982 Test: blob_thin_prov_rw_iov ...passed 00:09:26.982 Test: blob_snapshot_rw ...passed 00:09:26.982 Test: blob_snapshot_rw_iov ...passed 00:09:27.261 Test: 
blob_inflate_rw ...passed 00:09:27.261 Test: blob_snapshot_freeze_io ...passed 00:09:27.520 Test: blob_operation_split_rw ...passed 00:09:27.520 Test: blob_operation_split_rw_iov ...passed 00:09:27.520 Test: blob_simultaneous_operations ...[2024-11-20 07:19:31.380196] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:27.520 [2024-11-20 07:19:31.380284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:27.520 [2024-11-20 07:19:31.380742] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:27.520 [2024-11-20 07:19:31.380781] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:27.520 [2024-11-20 07:19:31.383521] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:27.520 [2024-11-20 07:19:31.383579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:27.520 [2024-11-20 07:19:31.383659] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:27.520 [2024-11-20 07:19:31.383670] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:27.520 passed 00:09:27.520 Test: blob_persist_test ...passed 00:09:27.779 Test: blob_decouple_snapshot ...passed 00:09:27.779 Test: blob_seek_io_unit ...passed 00:09:27.779 Test: blob_nested_freezes ...passed 00:09:27.779 Test: blob_clone_resize ...passed 00:09:27.779 Test: blob_shallow_copy ...[2024-11-20 07:19:31.611133] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:09:27.779 [2024-11-20 07:19:31.611396] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7352:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:09:27.779 [2024-11-20 07:19:31.611520] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:09:27.779 passed 00:09:27.779 Suite: blob_blob_copy_noextent 00:09:27.779 Test: blob_write ...passed 00:09:27.779 Test: blob_read ...passed 00:09:28.036 Test: blob_rw_verify ...passed 00:09:28.036 Test: blob_rw_verify_iov_nomem ...passed 00:09:28.036 Test: blob_rw_iov_read_only ...passed 00:09:28.036 Test: blob_xattr ...passed 00:09:28.036 Test: blob_dirty_shutdown ...passed 00:09:28.036 Test: blob_is_degraded ...passed 00:09:28.036 Suite: blob_esnap_bs_copy_noextent 00:09:28.036 Test: blob_esnap_create ...passed 00:09:28.036 Test: blob_esnap_thread_add_remove ...passed 00:09:28.295 Test: blob_esnap_clone_snapshot ...passed 00:09:28.295 Test: blob_esnap_clone_inflate ...passed 00:09:28.295 Test: blob_esnap_clone_decouple ...passed 00:09:28.295 Test: blob_esnap_clone_reload ...passed 00:09:28.295 Test: blob_esnap_hotplug ...passed 00:09:28.295 Test: blob_set_parent ...[2024-11-20 07:19:32.141150] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7623:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:09:28.295 [2024-11-20 07:19:32.141239] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7629:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:09:28.295 [2024-11-20 07:19:32.141333] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:09:28.295 [2024-11-20 07:19:32.141355] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7565:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:09:28.295 [2024-11-20 07:19:32.141759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:28.295 passed 00:09:28.295 Test: blob_set_external_parent ...[2024-11-20 07:19:32.173505] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:09:28.295 [2024-11-20 07:19:32.173586] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7806:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:09:28.295 [2024-11-20 07:19:32.173603] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7759:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:09:28.295 [2024-11-20 07:19:32.173935] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7765:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:28.295 passed 00:09:28.295 Suite: blob_copy_extent 00:09:28.295 Test: blob_init ...[2024-11-20 07:19:32.185052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5500:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:09:28.295 passed 00:09:28.295 Test: blob_thin_provision ...passed 00:09:28.295 Test: blob_read_only ...passed 00:09:28.553 Test: bs_load ...[2024-11-20 07:19:32.231475] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 974:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:09:28.553 passed 00:09:28.553 Test: bs_load_custom_cluster_size ...passed 00:09:28.553 Test: bs_load_after_failed_grow ...passed 00:09:28.553 Test: bs_cluster_sz ...[2024-11-20 07:19:32.255125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:09:28.553 [2024-11-20 07:19:32.255305] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5631:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:09:28.553 [2024-11-20 07:19:32.255339] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3893:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:09:28.553 passed 00:09:28.553 Test: bs_resize_md ...passed 00:09:28.553 Test: bs_destroy ...passed 00:09:28.553 Test: bs_type ...passed 00:09:28.553 Test: bs_super_block ...passed 00:09:28.553 Test: bs_test_recover_cluster_count ...passed 00:09:28.553 Test: bs_grow_live ...passed 00:09:28.553 Test: bs_grow_live_no_space ...passed 00:09:28.553 Test: bs_test_grow ...passed 00:09:28.553 Test: blob_serialize_test ...passed 00:09:28.553 Test: super_block_crc ...passed 00:09:28.553 Test: blob_thin_prov_write_count_io ...passed 00:09:28.553 Test: blob_thin_prov_unmap_cluster ...passed 00:09:28.553 Test: bs_load_iter_test ...passed 00:09:28.553 Test: blob_relations ...[2024-11-20 07:19:32.441690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:28.553 [2024-11-20 07:19:32.441792] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:28.553 [2024-11-20 07:19:32.442423] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:28.553 [2024-11-20 07:19:32.442456] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:28.553 passed 00:09:28.553 Test: blob_relations2 ...[2024-11-20 07:19:32.456252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:28.553 [2024-11-20 07:19:32.456341] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:28.553 [2024-11-20 07:19:32.456363] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:28.553 [2024-11-20 07:19:32.456372] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:28.553 [2024-11-20 07:19:32.457395] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:28.553 [2024-11-20 07:19:32.457438] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:28.553 [2024-11-20 07:19:32.457750] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:28.553 [2024-11-20 07:19:32.457783] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:28.553 passed 00:09:28.553 Test: blob_relations3 ...passed 00:09:28.811 Test: blobstore_clean_power_failure ...passed 00:09:28.811 Test: blob_delete_snapshot_power_failure ...[2024-11-20 07:19:32.606816] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:28.811 [2024-11-20 07:19:32.618499] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:28.811 [2024-11-20 07:19:32.630332] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:28.811 [2024-11-20 07:19:32.630416] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:28.811 [2024-11-20 07:19:32.630435] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:28.811 [2024-11-20 07:19:32.642371] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:28.811 [2024-11-20 07:19:32.642450] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:28.811 [2024-11-20 07:19:32.642467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:28.811 [2024-11-20 07:19:32.642485] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:28.811 [2024-11-20 07:19:32.654340] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:28.811 [2024-11-20 07:19:32.654416] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:28.811 [2024-11-20 07:19:32.654432] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:28.811 [2024-11-20 07:19:32.654468] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:28.811 [2024-11-20 07:19:32.666328] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8238:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:09:28.811 [2024-11-20 07:19:32.666428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:28.811 [2024-11-20 07:19:32.678388] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8107:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:09:28.811 [2024-11-20 07:19:32.678497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:28.812 [2024-11-20 07:19:32.690439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8051:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:09:28.812 [2024-11-20 07:19:32.690540] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:28.812 passed 00:09:28.812 Test: blob_create_snapshot_power_failure ...[2024-11-20 07:19:32.726124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:28.812 [2024-11-20 07:19:32.737723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:29.068 [2024-11-20 07:19:32.760713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:29.068 [2024-11-20 07:19:32.772567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6456:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:09:29.068 passed 00:09:29.068 Test: blob_io_unit ...passed 00:09:29.068 Test: blob_io_unit_compatibility ...passed 00:09:29.068 Test: blob_ext_md_pages ...passed 00:09:29.068 Test: blob_esnap_io_4096_4096 ...passed 00:09:29.068 Test: blob_esnap_io_512_512 ...passed 00:09:29.068 Test: blob_esnap_io_4096_512 ...passed 00:09:29.068 Test: 
blob_esnap_io_512_4096 ...passed 00:09:29.068 Test: blob_esnap_clone_resize ...passed 00:09:29.068 Suite: blob_bs_copy_extent 00:09:29.324 Test: blob_open ...passed 00:09:29.324 Test: blob_create ...[2024-11-20 07:19:33.041792] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:09:29.324 passed 00:09:29.324 Test: blob_create_loop ...passed 00:09:29.324 Test: blob_create_fail ...[2024-11-20 07:19:33.137890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:29.324 passed 00:09:29.324 Test: blob_create_internal ...passed 00:09:29.324 Test: blob_create_zero_extent ...passed 00:09:29.324 Test: blob_snapshot ...passed 00:09:29.582 Test: blob_clone ...passed 00:09:29.582 Test: blob_inflate ...[2024-11-20 07:19:33.303567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7119:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:09:29.582 passed 00:09:29.582 Test: blob_delete ...passed 00:09:29.582 Test: blob_resize_test ...[2024-11-20 07:19:33.366880] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7856:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:29.582 passed 00:09:29.582 Test: blob_resize_thin_test ...passed 00:09:29.582 Test: channel_ops ...passed 00:09:29.582 Test: blob_super ...passed 00:09:29.839 Test: blob_rw_verify_iov ...passed 00:09:29.839 Test: blob_unmap ...passed 00:09:29.839 Test: blob_iter ...passed 00:09:29.839 Test: blob_parse_md ...passed 00:09:29.840 Test: bs_load_pending_removal ...passed 00:09:29.840 Test: bs_unload ...[2024-11-20 07:19:33.655485] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5888:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:29.840 passed 00:09:29.840 Test: bs_usable_clusters ...passed 00:09:29.840 Test: blob_crc ...[2024-11-20 07:19:33.718520] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:29.840 [2024-11-20 07:19:33.718608] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:29.840 passed 00:09:29.840 Test: blob_flags ...passed 00:09:30.098 Test: bs_version ...passed 00:09:30.098 Test: blob_set_xattrs_test ...[2024-11-20 07:19:33.815820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:30.098 [2024-11-20 07:19:33.815901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:30.098 passed 00:09:30.098 Test: blob_thin_prov_alloc ...passed 00:09:30.098 Test: blob_insert_cluster_msg_test ...passed 00:09:30.098 Test: blob_thin_prov_rw ...passed 00:09:30.356 Test: blob_thin_prov_rle ...passed 00:09:30.356 Test: blob_thin_prov_rw_iov ...passed 00:09:30.356 Test: blob_snapshot_rw ...passed 00:09:30.356 Test: blob_snapshot_rw_iov ...passed 00:09:30.623 Test: blob_inflate_rw ...passed 00:09:30.623 Test: blob_snapshot_freeze_io ...passed 00:09:30.623 Test: blob_operation_split_rw ...passed 00:09:30.898 Test: blob_operation_split_rw_iov ...passed 00:09:30.898 Test: blob_simultaneous_operations ...[2024-11-20 07:19:34.703478] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:30.898 [2024-11-20 07:19:34.703569] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:30.898 [2024-11-20 07:19:34.704052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:30.898 [2024-11-20 07:19:34.704094] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:30.898 [2024-11-20 07:19:34.706795] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:30.898 [2024-11-20 07:19:34.706843] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:30.898 [2024-11-20 07:19:34.706937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:30.898 [2024-11-20 07:19:34.706950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:30.898 passed 00:09:30.898 Test: blob_persist_test ...passed 00:09:30.898 Test: blob_decouple_snapshot ...passed 00:09:31.156 Test: blob_seek_io_unit ...passed 00:09:31.156 Test: blob_nested_freezes ...passed 00:09:31.156 Test: blob_clone_resize ...passed 00:09:31.156 Test: blob_shallow_copy ...[2024-11-20 07:19:34.943547] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:09:31.156 [2024-11-20 07:19:34.943810] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7352:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:09:31.156 [2024-11-20 07:19:34.943940] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:09:31.156 passed 00:09:31.156 Suite: blob_blob_copy_extent 00:09:31.156 Test: blob_write ...passed 00:09:31.156 Test: blob_read ...passed 00:09:31.156 Test: blob_rw_verify ...passed 00:09:31.414 Test: blob_rw_verify_iov_nomem ...passed 00:09:31.414 Test: blob_rw_iov_read_only ...passed 00:09:31.414 Test: blob_xattr ...passed 00:09:31.414 Test: blob_dirty_shutdown ...passed 00:09:31.414 Test: blob_is_degraded ...passed 00:09:31.414 Suite: blob_esnap_bs_copy_extent 00:09:31.414 Test: blob_esnap_create ...passed 00:09:31.414 Test: blob_esnap_thread_add_remove ...passed 00:09:31.414 Test: blob_esnap_clone_snapshot ...passed 00:09:31.673 Test: blob_esnap_clone_inflate ...passed 00:09:31.673 Test: blob_esnap_clone_decouple ...passed 00:09:31.673 Test: blob_esnap_clone_reload ...passed 00:09:31.673 Test: blob_esnap_hotplug ...passed 00:09:31.673 Test: blob_set_parent ...[2024-11-20 07:19:35.481918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7623:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:09:31.673 [2024-11-20 07:19:35.482011] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7629:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:09:31.673 [2024-11-20 07:19:35.482123] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:09:31.673 
[2024-11-20 07:19:35.482147] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7565:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:09:31.673 [2024-11-20 07:19:35.482637] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:31.673 passed 00:09:31.673 Test: blob_set_external_parent ...[2024-11-20 07:19:35.514799] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:09:31.673 [2024-11-20 07:19:35.514891] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7806:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:09:31.673 [2024-11-20 07:19:35.514908] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7759:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:09:31.673 [2024-11-20 07:19:35.515330] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7765:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:31.673 passed 00:09:31.673 00:09:31.673 Run Summary: Type Total Ran Passed Failed Inactive 00:09:31.673 suites 16 16 n/a 0 0 00:09:31.673 tests 376 376 376 0 0 00:09:31.673 asserts 144129 144129 144129 0 n/a 00:09:31.673 00:09:31.673 Elapsed time = 13.774 seconds 00:09:31.931 07:19:35 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:09:31.931 00:09:31.931 00:09:31.931 CUnit - A unit testing framework for C - Version 2.1-3 00:09:31.931 http://cunit.sourceforge.net/ 00:09:31.931 00:09:31.931 00:09:31.931 Suite: blob_bdev 00:09:31.931 Test: create_bs_dev ...passed 00:09:31.931 Test: create_bs_dev_ro ...[2024-11-20 07:19:35.653603] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 539:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:09:31.931 passed 00:09:31.931 Test: create_bs_dev_rw ...passed 00:09:31.931 Test: claim_bs_dev ...[2024-11-20 07:19:35.654132] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 350:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:09:31.931 passed 00:09:31.931 Test: claim_bs_dev_ro ...passed 00:09:31.931 Test: deferred_destroy_refs ...passed 00:09:31.931 Test: deferred_destroy_channels ...passed 00:09:31.931 Test: deferred_destroy_threads ...passed 00:09:31.931 00:09:31.931 Run Summary: Type Total Ran Passed Failed Inactive 00:09:31.931 suites 1 1 n/a 0 0 00:09:31.931 tests 8 8 8 0 0 00:09:31.931 asserts 119 119 119 0 n/a 00:09:31.931 00:09:31.931 Elapsed time = 0.001 seconds 00:09:31.931 07:19:35 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:09:31.931 00:09:31.931 00:09:31.931 CUnit - A unit testing framework for C - Version 2.1-3 00:09:31.931 http://cunit.sourceforge.net/ 00:09:31.931 00:09:31.931 00:09:31.931 Suite: tree 00:09:31.931 Test: blobfs_tree_op_test ...passed 00:09:31.931 00:09:31.931 Run Summary: Type Total Ran Passed Failed Inactive 00:09:31.931 suites 1 1 n/a 0 0 00:09:31.931 tests 1 1 1 0 0 00:09:31.931 asserts 27 27 27 0 n/a 00:09:31.931 00:09:31.931 Elapsed time = 0.000 seconds 00:09:31.931 07:19:35 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:09:31.931 00:09:31.931 00:09:31.931 
CUnit - A unit testing framework for C - Version 2.1-3 00:09:31.931 http://cunit.sourceforge.net/ 00:09:31.931 00:09:31.931 00:09:31.931 Suite: blobfs_async_ut 00:09:31.931 Test: fs_init ...passed 00:09:31.931 Test: fs_open ...passed 00:09:31.931 Test: fs_create ...passed 00:09:31.931 Test: fs_truncate ...passed 00:09:31.931 Test: fs_rename ...[2024-11-20 07:19:35.842732] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1480:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:09:31.931 passed 00:09:31.931 Test: fs_rw_async ...passed 00:09:32.190 Test: fs_writev_readv_async ...passed 00:09:32.190 Test: tree_find_buffer_ut ...passed 00:09:32.190 Test: channel_ops ...passed 00:09:32.190 Test: channel_ops_sync ...passed 00:09:32.190 00:09:32.190 Run Summary: Type Total Ran Passed Failed Inactive 00:09:32.190 suites 1 1 n/a 0 0 00:09:32.190 tests 10 10 10 0 0 00:09:32.190 asserts 292 292 292 0 n/a 00:09:32.190 00:09:32.190 Elapsed time = 0.165 seconds 00:09:32.190 07:19:35 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:09:32.190 00:09:32.190 00:09:32.190 CUnit - A unit testing framework for C - Version 2.1-3 00:09:32.190 http://cunit.sourceforge.net/ 00:09:32.190 00:09:32.190 00:09:32.190 Suite: blobfs_sync_ut 00:09:32.190 Test: cache_read_after_write ...[2024-11-20 07:19:36.010243] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1480:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:09:32.190 passed 00:09:32.190 Test: file_length ...passed 00:09:32.190 Test: append_write_to_extend_blob ...passed 00:09:32.190 Test: partial_buffer ...passed 00:09:32.190 Test: cache_write_null_buffer ...passed 00:09:32.190 Test: fs_create_sync ...passed 00:09:32.190 Test: fs_rename_sync ...passed 00:09:32.190 Test: cache_append_no_cache ...passed 00:09:32.452 Test: fs_delete_file_without_close ...passed 00:09:32.452 00:09:32.452 Run Summary: Type Total Ran Passed Failed Inactive 00:09:32.452 suites 1 1 n/a 0 0 00:09:32.452 tests 9 9 9 0 0 00:09:32.452 asserts 345 345 345 0 n/a 00:09:32.452 00:09:32.452 Elapsed time = 0.332 seconds 00:09:32.452 07:19:36 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:09:32.452 00:09:32.452 00:09:32.452 CUnit - A unit testing framework for C - Version 2.1-3 00:09:32.452 http://cunit.sourceforge.net/ 00:09:32.452 00:09:32.452 00:09:32.452 Suite: blobfs_bdev_ut 00:09:32.452 Test: spdk_blobfs_bdev_detect_test ...[2024-11-20 07:19:36.191481] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:09:32.452 passed 00:09:32.452 Test: spdk_blobfs_bdev_create_test ...[2024-11-20 07:19:36.191997] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:09:32.452 passed 00:09:32.452 Test: spdk_blobfs_bdev_mount_test ...passed 00:09:32.452 00:09:32.452 Run Summary: Type Total Ran Passed Failed Inactive 00:09:32.452 suites 1 1 n/a 0 0 00:09:32.452 tests 3 3 3 0 0 00:09:32.452 asserts 9 9 9 0 n/a 00:09:32.452 00:09:32.452 Elapsed time = 0.001 seconds 00:09:32.452 00:09:32.452 real 0m14.501s 00:09:32.452 user 0m13.824s 00:09:32.452 sys 0m0.852s 00:09:32.452 07:19:36 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.452 07:19:36 
unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:09:32.452 ************************************ 00:09:32.452 END TEST unittest_blob_blobfs 00:09:32.452 ************************************ 00:09:32.452 07:19:36 unittest -- unit/unittest.sh@216 -- # run_test unittest_event unittest_event 00:09:32.452 07:19:36 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.452 07:19:36 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.452 07:19:36 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:32.452 ************************************ 00:09:32.452 START TEST unittest_event 00:09:32.452 ************************************ 00:09:32.452 07:19:36 unittest.unittest_event -- common/autotest_common.sh@1129 -- # unittest_event 00:09:32.452 07:19:36 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:09:32.452 00:09:32.452 00:09:32.452 CUnit - A unit testing framework for C - Version 2.1-3 00:09:32.452 http://cunit.sourceforge.net/ 00:09:32.452 00:09:32.452 00:09:32.452 Suite: app_suite 00:09:32.452 Test: test_spdk_app_parse_args ...app_ut [options] 00:09:32.452 00:09:32.452 CPU options: 00:09:32.452 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:32.452 (like [0,1,10]) 00:09:32.452 --lcores lcore to CPU mapping list. The list is in the format: 00:09:32.452 [<,lcores[@CPUs]>...] 00:09:32.452 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:32.452 Within the group, '-' is used for range separator, 00:09:32.452 ',' is used for single number separator. 00:09:32.452 '( )' can be omitted for single element group, 00:09:32.452 '@' can be omitted if cpus and lcores have the same value 00:09:32.452 --disable-cpumask-locks Disable CPU core lock files. 00:09:32.452 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:32.452 pollers in the app support interrupt mode) 00:09:32.452 -p, --main-core main (primary) core for DPDK 00:09:32.452 00:09:32.452 Configuration options: 00:09:32.452 -c, --config, --json JSON config file 00:09:32.452 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:32.452 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:09:32.452 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:32.452 --rpcs-allowed comma-separated list of permitted RPCS 00:09:32.452 --json-ignore-init-errors don't exit on invalid config entry 00:09:32.452 00:09:32.452 Memory options:app_ut: invalid option -- 'z' 00:09:32.452 00:09:32.452 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:32.452 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:32.452 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:32.452 -R, --huge-unlink unlink huge files after initialization 00:09:32.452 -n, --mem-channels number of memory channels used for DPDK 00:09:32.452 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:32.452 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:32.452 --no-huge run without using hugepages 00:09:32.452 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:09:32.452 -i, --shm-id shared memory ID (optional) 00:09:32.452 -g, --single-file-segments force creating just one hugetlbfs file 00:09:32.452 00:09:32.452 PCI options: 00:09:32.452 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:32.452 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:32.452 -u, --no-pci disable PCI access 00:09:32.452 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:32.452 00:09:32.452 Log options: 00:09:32.452 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:09:32.452 --silence-noticelog disable notice level logging to stderr 00:09:32.452 00:09:32.452 Trace options: 00:09:32.452 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:32.452 setting 0 to disable trace (default 32768) 00:09:32.452 Tracepoints vary in size and can use more than one trace entry. 00:09:32.453 -e, --tpoint-group [:] 00:09:32.453 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:09:32.453 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:32.453 a tracepoint group. First tpoint inside a group can be enabled by 00:09:32.453 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:32.453 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:09:32.453 in /include/spdk_internal/trace_defs.h 00:09:32.453 00:09:32.453 Other options: 00:09:32.453 -h, --help show this usage 00:09:32.453 -v, --version print SPDK version 00:09:32.453 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:32.453 --env-context Opaque context for use of the env implementation 00:09:32.453 app_ut [options] 00:09:32.453 00:09:32.453 CPU options: 00:09:32.453 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:32.453 (like [0,1,10]) 00:09:32.453 --lcores lcore to CPU mapping list. The list is in the format: 00:09:32.453 [<,lcores[@CPUs]>...] 00:09:32.453 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:32.453 Within the group, '-' is used for range separator, 00:09:32.453 ',' is used for single number separator. 00:09:32.453 '( )' can be omitted for single element group, 00:09:32.453 '@' can be omitted if cpus and lcores have the same value 00:09:32.453 --disable-cpumask-locks Disable CPU core lock files. 
00:09:32.453 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:32.453 pollers in the app support interrupt mode) 00:09:32.453 -p, --main-core main (primary) core for DPDK 00:09:32.453 00:09:32.453 Configuration options: 00:09:32.453 -c, --config, --json JSON config file 00:09:32.453 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:32.453 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:09:32.453 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:32.453 --rpcs-allowed comma-separated list of permitted RPCS 00:09:32.453 --json-ignore-init-errors don't exit on invalid config entry 00:09:32.453 00:09:32.453 Memory options: 00:09:32.453 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:32.453 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:32.453 app_ut: unrecognized option '--test-long-opt' 00:09:32.453 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:32.453 -R, --huge-unlink unlink huge files after initialization 00:09:32.453 -n, --mem-channels number of memory channels used for DPDK 00:09:32.453 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:32.453 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:32.453 --no-huge run without using hugepages 00:09:32.453 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:09:32.453 -i, --shm-id shared memory ID (optional) 00:09:32.453 -g, --single-file-segments force creating just one hugetlbfs file 00:09:32.453 00:09:32.453 PCI options: 00:09:32.453 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:32.453 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:32.453 -u, --no-pci disable PCI access 00:09:32.453 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:32.453 00:09:32.453 Log options: 00:09:32.453 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:09:32.453 --silence-noticelog disable notice level logging to stderr 00:09:32.453 00:09:32.453 Trace options: 00:09:32.453 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:32.453 setting 0 to disable trace (default 32768) 00:09:32.453 Tracepoints vary in size and can use more than one trace entry. 00:09:32.453 -e, --tpoint-group [:] 00:09:32.453 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:09:32.453 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:32.453 a tracepoint group. First tpoint inside a group can be enabled by 00:09:32.453 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:32.453 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:09:32.453 in /include/spdk_internal/trace_defs.h 00:09:32.453 00:09:32.453 Other options: 00:09:32.453 -h, --help show this usage 00:09:32.453 -v, --version print SPDK version 00:09:32.453 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:32.453 --env-context Opaque context for use of the env implementation 00:09:32.453 [2024-11-20 07:19:36.307076] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1204:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
00:09:32.453 app_ut [options] 00:09:32.453 00:09:32.453 CPU options: 00:09:32.453 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:32.453 (like [0,1,10]) 00:09:32.453 --lcores lcore to CPU mapping list. The list is in the format: 00:09:32.453 [<,lcores[@CPUs]>...] 00:09:32.453 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:32.453 Within the group, '-' is used for range separator, 00:09:32.453 ',' is used for single number separator. 00:09:32.453 '( )' can be omitted for single element group, 00:09:32.453 '@' can be omitted if cpus and lcores have the same value 00:09:32.453 --disable-cpumask-locks Disable CPU core lock files. 00:09:32.453 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:32.453 pollers in the app support interrupt mode) 00:09:32.453 -p, --main-core main (primary) core for DPDK 00:09:32.453 00:09:32.453 Configuration options:[2024-11-20 07:19:36.307348] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1388:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:09:32.453 00:09:32.453 -c, --config, --json JSON config file 00:09:32.453 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:32.453 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:09:32.453 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:32.453 --rpcs-allowed comma-separated list of permitted RPCS 00:09:32.453 --json-ignore-init-errors don't exit on invalid config entry 00:09:32.453 00:09:32.453 Memory options: 00:09:32.453 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:32.453 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:32.453 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:32.453 -R, --huge-unlink unlink huge files after initialization 00:09:32.453 -n, --mem-channels number of memory channels used for DPDK 00:09:32.453 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:32.453 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:32.453 --no-huge run without using hugepages 00:09:32.453 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:09:32.453 -i, --shm-id shared memory ID (optional) 00:09:32.453 -g, --single-file-segments force creating just one hugetlbfs file 00:09:32.453 00:09:32.453 PCI options: 00:09:32.453 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:32.453 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:32.453 -u, --no-pci disable PCI access 00:09:32.453 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:32.453 00:09:32.453 Log options: 00:09:32.453 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:09:32.453 --silence-noticelog disable notice level logging to stderr 00:09:32.453 00:09:32.453 Trace options: 00:09:32.453 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:32.453 setting 0 to disable trace (default 32768) 00:09:32.453 Tracepoints vary in size and can use more than one trace entry. 00:09:32.453 -e, --tpoint-group [:] 00:09:32.453 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:09:32.453 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:32.453 a tracepoint group. 
First tpoint inside a group can be enabled by 00:09:32.453 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:32.453 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:09:32.453 in /include/spdk_internal/trace_defs.h 00:09:32.453 00:09:32.453 Other options: 00:09:32.453 -h, --help show this usage 00:09:32.453 -v, --version print SPDK version 00:09:32.453 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:32.453 --env-context Opaque context for use of the env implementation 00:09:32.453 passed 00:09:32.453 00:09:32.453 [2024-11-20 07:19:36.307508] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1290:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:09:32.453 Run Summary: Type Total Ran Passed Failed Inactive 00:09:32.453 suites 1 1 n/a 0 0 00:09:32.453 tests 1 1 1 0 0 00:09:32.454 asserts 8 8 8 0 n/a 00:09:32.454 00:09:32.454 Elapsed time = 0.001 seconds 00:09:32.454 07:19:36 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:09:32.454 00:09:32.454 00:09:32.454 CUnit - A unit testing framework for C - Version 2.1-3 00:09:32.454 http://cunit.sourceforge.net/ 00:09:32.454 00:09:32.454 00:09:32.454 Suite: app_suite 00:09:32.454 Test: test_create_reactor ...passed 00:09:32.454 Test: test_init_reactors ...passed 00:09:32.454 Test: test_event_call ...passed 00:09:32.454 Test: test_schedule_thread ...passed 00:09:32.454 Test: test_reschedule_thread ...passed 00:09:32.454 Test: test_bind_thread ...passed 00:09:32.454 Test: test_for_each_reactor ...passed 00:09:32.454 Test: test_reactor_stats ...passed 00:09:32.454 Test: test_scheduler ...passed 00:09:32.454 Test: test_governor ...passed 00:09:32.454 Test: test_scheduler_set_isolated_core_mask ...[2024-11-20 07:19:36.369872] /home/vagrant/spdk_repo/spdk/lib/event/reactor.c: 187:scheduler_set_isolated_core_mask: *ERROR*: Isolated core mask is not included in app core mask. 00:09:32.454 [2024-11-20 07:19:36.370062] /home/vagrant/spdk_repo/spdk/lib/event/reactor.c: 187:scheduler_set_isolated_core_mask: *ERROR*: Isolated core mask is not included in app core mask. 
00:09:32.454 passed 00:09:32.454 Test: test_mixed_workload ...passed 00:09:32.454 00:09:32.454 Run Summary: Type Total Ran Passed Failed Inactive 00:09:32.454 suites 1 1 n/a 0 0 00:09:32.454 tests 12 12 12 0 0 00:09:32.454 asserts 344 344 344 0 n/a 00:09:32.454 00:09:32.454 Elapsed time = 0.024 seconds 00:09:32.712 00:09:32.712 real 0m0.122s 00:09:32.712 user 0m0.060s 00:09:32.712 sys 0m0.063s 00:09:32.712 07:19:36 unittest.unittest_event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.712 07:19:36 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:09:32.712 ************************************ 00:09:32.712 END TEST unittest_event 00:09:32.712 ************************************ 00:09:32.712 07:19:36 unittest -- unit/unittest.sh@217 -- # uname -s 00:09:32.712 07:19:36 unittest -- unit/unittest.sh@217 -- # '[' Linux = Linux ']' 00:09:32.712 07:19:36 unittest -- unit/unittest.sh@218 -- # run_test unittest_ftl unittest_ftl 00:09:32.712 07:19:36 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.712 07:19:36 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.712 07:19:36 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:32.712 ************************************ 00:09:32.712 START TEST unittest_ftl 00:09:32.712 ************************************ 00:09:32.712 07:19:36 unittest.unittest_ftl -- common/autotest_common.sh@1129 -- # unittest_ftl 00:09:32.712 07:19:36 unittest.unittest_ftl -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:09:32.712 00:09:32.712 00:09:32.712 CUnit - A unit testing framework for C - Version 2.1-3 00:09:32.712 http://cunit.sourceforge.net/ 00:09:32.712 00:09:32.712 00:09:32.712 Suite: ftl_band_suite 00:09:32.712 Test: test_band_block_offset_from_addr_base ...passed 00:09:32.713 Test: test_band_block_offset_from_addr_offset ...passed 00:09:32.713 Test: test_band_addr_from_block_offset ...passed 00:09:32.713 Test: test_band_set_addr ...passed 00:09:32.971 Test: test_invalidate_addr ...passed 00:09:32.971 Test: test_next_xfer_addr ...passed 00:09:32.971 00:09:32.971 Run Summary: Type Total Ran Passed Failed Inactive 00:09:32.971 suites 1 1 n/a 0 0 00:09:32.971 tests 6 6 6 0 0 00:09:32.971 asserts 30356 30356 30356 0 n/a 00:09:32.971 00:09:32.971 Elapsed time = 0.171 seconds 00:09:32.971 07:19:36 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:09:32.971 00:09:32.971 00:09:32.971 CUnit - A unit testing framework for C - Version 2.1-3 00:09:32.971 http://cunit.sourceforge.net/ 00:09:32.971 00:09:32.971 00:09:32.971 Suite: ftl_bitmap 00:09:32.971 Test: test_ftl_bitmap_create ...[2024-11-20 07:19:36.778728] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:09:32.971 passed 00:09:32.971 Test: test_ftl_bitmap_get ...[2024-11-20 07:19:36.779033] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:09:32.971 passed 00:09:32.971 Test: test_ftl_bitmap_set ...passed 00:09:32.971 Test: test_ftl_bitmap_clear ...passed 00:09:32.971 Test: test_ftl_bitmap_find_first_set ...passed 00:09:32.971 Test: test_ftl_bitmap_find_first_clear ...passed 00:09:32.971 Test: test_ftl_bitmap_count_set ...passed 00:09:32.971 00:09:32.971 Run Summary: Type Total Ran Passed Failed Inactive 00:09:32.971 suites 1 1 n/a 
0 0 00:09:32.971 tests 7 7 7 0 0 00:09:32.971 asserts 137 137 137 0 n/a 00:09:32.971 00:09:32.971 Elapsed time = 0.002 seconds 00:09:32.971 07:19:36 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:09:32.971 00:09:32.971 00:09:32.971 CUnit - A unit testing framework for C - Version 2.1-3 00:09:32.971 http://cunit.sourceforge.net/ 00:09:32.971 00:09:32.971 00:09:32.971 Suite: ftl_io_suite 00:09:32.971 Test: test_completion ...passed 00:09:32.971 Test: test_multiple_ios ...passed 00:09:32.971 00:09:32.971 Run Summary: Type Total Ran Passed Failed Inactive 00:09:32.971 suites 1 1 n/a 0 0 00:09:32.971 tests 2 2 2 0 0 00:09:32.971 asserts 47 47 47 0 n/a 00:09:32.971 00:09:32.971 Elapsed time = 0.004 seconds 00:09:32.971 07:19:36 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:09:32.971 00:09:32.971 00:09:32.971 CUnit - A unit testing framework for C - Version 2.1-3 00:09:32.971 http://cunit.sourceforge.net/ 00:09:32.971 00:09:32.971 00:09:32.971 Suite: ftl_mngt 00:09:32.971 Test: test_next_step ...passed 00:09:32.971 Test: test_continue_step ...passed 00:09:32.971 Test: test_get_func_and_step_cntx_alloc ...passed 00:09:32.971 Test: test_fail_step ...passed 00:09:32.971 Test: test_mngt_call_and_call_rollback ...passed 00:09:32.971 Test: test_nested_process_failure ...passed 00:09:32.971 Test: test_call_init_success ...passed 00:09:32.971 Test: test_call_init_failure ...passed 00:09:32.971 00:09:32.971 Run Summary: Type Total Ran Passed Failed Inactive 00:09:32.971 suites 1 1 n/a 0 0 00:09:32.971 tests 8 8 8 0 0 00:09:32.971 asserts 196 196 196 0 n/a 00:09:32.971 00:09:32.971 Elapsed time = 0.003 seconds 00:09:33.230 07:19:36 unittest.unittest_ftl -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:09:33.230 00:09:33.230 00:09:33.230 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.230 http://cunit.sourceforge.net/ 00:09:33.230 00:09:33.230 00:09:33.230 Suite: ftl_mempool 00:09:33.230 Test: test_ftl_mempool_create ...passed 00:09:33.230 Test: test_ftl_mempool_get_put ...passed 00:09:33.230 00:09:33.230 Run Summary: Type Total Ran Passed Failed Inactive 00:09:33.230 suites 1 1 n/a 0 0 00:09:33.230 tests 2 2 2 0 0 00:09:33.230 asserts 36 36 36 0 n/a 00:09:33.230 00:09:33.230 Elapsed time = 0.000 seconds 00:09:33.230 07:19:36 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:09:33.230 00:09:33.230 00:09:33.230 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.230 http://cunit.sourceforge.net/ 00:09:33.230 00:09:33.230 00:09:33.230 Suite: ftl_addr64_suite 00:09:33.230 Test: test_addr_cached ...passed 00:09:33.230 00:09:33.230 Run Summary: Type Total Ran Passed Failed Inactive 00:09:33.230 suites 1 1 n/a 0 0 00:09:33.230 tests 1 1 1 0 0 00:09:33.230 asserts 1536 1536 1536 0 n/a 00:09:33.230 00:09:33.230 Elapsed time = 0.001 seconds 00:09:33.230 07:19:36 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:09:33.230 00:09:33.230 00:09:33.230 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.230 http://cunit.sourceforge.net/ 00:09:33.230 00:09:33.230 00:09:33.230 Suite: ftl_sb 00:09:33.230 Test: test_sb_crc_v2 ...passed 00:09:33.230 Test: test_sb_crc_v3 ...passed 00:09:33.230 Test: test_sb_v3_md_layout ...[2024-11-20 07:19:36.998637] 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:09:33.230 [2024-11-20 07:19:36.998920] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:09:33.230 [2024-11-20 07:19:36.998973] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:09:33.230 [2024-11-20 07:19:36.999014] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:09:33.230 [2024-11-20 07:19:36.999056] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:09:33.230 [2024-11-20 07:19:36.999090] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:09:33.230 [2024-11-20 07:19:36.999131] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:09:33.230 [2024-11-20 07:19:36.999171] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:09:33.230 [2024-11-20 07:19:36.999272] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:09:33.230 [2024-11-20 07:19:36.999311] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:09:33.230 [2024-11-20 07:19:36.999366] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:09:33.230 passed 00:09:33.230 Test: test_sb_v5_md_layout ...passed 00:09:33.230 00:09:33.230 Run Summary: Type Total Ran Passed Failed Inactive 00:09:33.230 suites 1 1 n/a 0 0 00:09:33.230 tests 4 4 4 0 0 00:09:33.230 asserts 170 170 170 0 n/a 00:09:33.230 00:09:33.230 Elapsed time = 0.002 seconds 00:09:33.230 07:19:37 unittest.unittest_ftl -- unit/unittest.sh@63 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:09:33.230 00:09:33.230 00:09:33.230 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.230 http://cunit.sourceforge.net/ 00:09:33.230 00:09:33.230 00:09:33.230 Suite: ftl_layout_upgrade 00:09:33.230 Test: test_l2p_upgrade ...passed 00:09:33.230 00:09:33.230 Run Summary: Type Total Ran Passed Failed Inactive 00:09:33.230 suites 1 1 n/a 0 0 00:09:33.230 tests 1 1 1 0 0 00:09:33.230 asserts 164 164 164 0 n/a 00:09:33.230 00:09:33.230 Elapsed time = 0.001 seconds 00:09:33.230 07:19:37 unittest.unittest_ftl -- unit/unittest.sh@64 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut 00:09:33.230 00:09:33.230 00:09:33.230 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.230 http://cunit.sourceforge.net/ 00:09:33.230 00:09:33.230 00:09:33.230 Suite: ftl_p2l_suite 00:09:33.230 Test: test_p2l_num_pages ...passed 00:09:33.230 Test: test_ckpt_issue ...passed 00:09:33.230 Test: test_persist_band_p2l ...passed 00:09:33.230 Test: test_clean_restore_p2l ...passed 00:09:33.230 Test: test_dirty_restore_p2l 
...passed 00:09:33.230 00:09:33.230 Run Summary: Type Total Ran Passed Failed Inactive 00:09:33.230 suites 1 1 n/a 0 0 00:09:33.230 tests 5 5 5 0 0 00:09:33.230 asserts 10020 10020 10020 0 n/a 00:09:33.230 00:09:33.230 Elapsed time = 0.078 seconds 00:09:33.490 00:09:33.490 real 0m0.705s 00:09:33.490 user 0m0.332s 00:09:33.490 sys 0m0.375s 00:09:33.490 07:19:37 unittest.unittest_ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.490 07:19:37 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x 00:09:33.490 ************************************ 00:09:33.490 END TEST unittest_ftl 00:09:33.490 ************************************ 00:09:33.490 07:19:37 unittest -- unit/unittest.sh@221 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:09:33.490 07:19:37 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.490 07:19:37 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.490 07:19:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:33.490 ************************************ 00:09:33.490 START TEST unittest_accel 00:09:33.490 ************************************ 00:09:33.490 07:19:37 unittest.unittest_accel -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:09:33.490 00:09:33.490 00:09:33.490 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.490 http://cunit.sourceforge.net/ 00:09:33.490 00:09:33.490 00:09:33.490 Suite: accel_sequence 00:09:33.490 Test: test_sequence_fill_copy ...passed 00:09:33.490 Test: test_sequence_abort ...passed 00:09:33.490 Test: test_sequence_append_error ...passed 00:09:33.490 Test: test_sequence_completion_error ...[2024-11-20 07:19:37.281368] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2382:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7467323247c0 00:09:33.490 [2024-11-20 07:19:37.281609] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2382:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7467323247c0 00:09:33.490 [2024-11-20 07:19:37.281648] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2295:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7467323247c0 00:09:33.490 [2024-11-20 07:19:37.281699] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2295:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7467323247c0 00:09:33.490 passed 00:09:33.490 Test: test_sequence_decompress ...passed 00:09:33.490 Test: test_sequence_reverse ...passed 00:09:33.490 Test: test_sequence_copy_elision ...passed 00:09:33.490 Test: test_sequence_accel_buffers ...passed 00:09:33.490 Test: test_sequence_memory_domain ...[2024-11-20 07:19:37.291357] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2187:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:09:33.490 [2024-11-20 07:19:37.291509] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2226:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:09:33.490 passed 00:09:33.490 Test: test_sequence_module_memory_domain ...passed 00:09:33.490 Test: test_sequence_crypto ...passed 00:09:33.490 Test: test_sequence_driver ...[2024-11-20 07:19:37.297323] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2334:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x74672eb2f7c0 using driver: ut 00:09:33.490 [2024-11-20 07:19:37.297415] 
/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2395:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x74672eb2f7c0 through driver: ut 00:09:33.490 passed 00:09:33.490 Test: test_sequence_same_iovs ...passed 00:09:33.490 Test: test_sequence_crc32 ...passed 00:09:33.490 Test: test_sequence_dix_generate_verify ...passed 00:09:33.490 Test: test_sequence_dix ...passed 00:09:33.490 Suite: accel 00:09:33.490 Test: test_spdk_accel_task_complete ...passed 00:09:33.490 Test: test_get_task ...passed 00:09:33.490 Test: test_spdk_accel_submit_copy ...passed 00:09:33.490 Test: test_spdk_accel_submit_dualcast ...[2024-11-20 07:19:37.305231] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 427:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:09:33.490 [2024-11-20 07:19:37.305291] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 427:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:09:33.490 passed 00:09:33.490 Test: test_spdk_accel_submit_compare ...passed 00:09:33.490 Test: test_spdk_accel_submit_fill ...passed 00:09:33.490 Test: test_spdk_accel_submit_crc32c ...passed 00:09:33.490 Test: test_spdk_accel_submit_crc32cv ...passed 00:09:33.490 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:09:33.490 Test: test_spdk_accel_submit_xor ...passed 00:09:33.490 Test: test_spdk_accel_module_find_by_name ...passed 00:09:33.490 Test: test_spdk_accel_module_register ...passed 00:09:33.490 00:09:33.490 Run Summary: Type Total Ran Passed Failed Inactive 00:09:33.490 suites 2 2 n/a 0 0 00:09:33.490 tests 28 28 28 0 0 00:09:33.490 asserts 884 884 884 0 n/a 00:09:33.490 00:09:33.490 Elapsed time = 0.035 seconds 00:09:33.490 00:09:33.490 real 0m0.094s 00:09:33.490 user 0m0.043s 00:09:33.490 sys 0m0.051s 00:09:33.490 07:19:37 unittest.unittest_accel -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.490 07:19:37 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:09:33.490 ************************************ 00:09:33.490 END TEST unittest_accel 00:09:33.490 ************************************ 00:09:33.490 07:19:37 unittest -- unit/unittest.sh@222 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:09:33.490 07:19:37 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.490 07:19:37 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.490 07:19:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:33.490 ************************************ 00:09:33.490 START TEST unittest_ioat 00:09:33.490 ************************************ 00:09:33.490 07:19:37 unittest.unittest_ioat -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:09:33.749 00:09:33.749 00:09:33.749 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.749 http://cunit.sourceforge.net/ 00:09:33.749 00:09:33.749 00:09:33.749 Suite: ioat 00:09:33.749 Test: ioat_state_check ...passed 00:09:33.749 00:09:33.749 Run Summary: Type Total Ran Passed Failed Inactive 00:09:33.749 suites 1 1 n/a 0 0 00:09:33.749 tests 1 1 1 0 0 00:09:33.749 asserts 32 32 32 0 n/a 00:09:33.749 00:09:33.749 Elapsed time = 0.000 seconds 00:09:33.749 00:09:33.749 real 0m0.043s 00:09:33.749 user 0m0.024s 00:09:33.749 sys 0m0.019s 00:09:33.749 07:19:37 unittest.unittest_ioat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.749 07:19:37 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set 
+x 00:09:33.749 ************************************ 00:09:33.749 END TEST unittest_ioat 00:09:33.749 ************************************ 00:09:33.749 07:19:37 unittest -- unit/unittest.sh@223 -- # [[ y == y ]] 00:09:33.749 07:19:37 unittest -- unit/unittest.sh@224 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:09:33.749 07:19:37 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.749 07:19:37 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.749 07:19:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:33.749 ************************************ 00:09:33.749 START TEST unittest_idxd_user 00:09:33.749 ************************************ 00:09:33.749 07:19:37 unittest.unittest_idxd_user -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:09:33.749 00:09:33.749 00:09:33.749 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.749 http://cunit.sourceforge.net/ 00:09:33.749 00:09:33.749 00:09:33.749 Suite: idxd_user 00:09:33.749 Test: test_idxd_wait_cmd ...[2024-11-20 07:19:37.533033] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:09:33.749 passed 00:09:33.749 Test: test_idxd_reset_dev ...passed 00:09:33.749 Test: test_idxd_group_config ...passed 00:09:33.749 Test: test_idxd_wq_config ...passed 00:09:33.749 00:09:33.749 Run Summary: Type Total Ran Passed Failed Inactive 00:09:33.749 suites 1 1 n/a 0 0 00:09:33.749 tests 4 4 4 0 0 00:09:33.749 asserts 20 20 20 0 n/a 00:09:33.749 00:09:33.749 Elapsed time = 0.001 seconds 00:09:33.749 [2024-11-20 07:19:37.533189] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:09:33.749 [2024-11-20 07:19:37.533301] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:09:33.749 [2024-11-20 07:19:37.533340] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:09:33.749 00:09:33.749 real 0m0.046s 00:09:33.749 user 0m0.017s 00:09:33.749 sys 0m0.029s 00:09:33.749 07:19:37 unittest.unittest_idxd_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.749 07:19:37 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:09:33.749 ************************************ 00:09:33.749 END TEST unittest_idxd_user 00:09:33.749 ************************************ 00:09:33.749 07:19:37 unittest -- unit/unittest.sh@226 -- # run_test unittest_iscsi unittest_iscsi 00:09:33.749 07:19:37 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.749 07:19:37 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.750 07:19:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:33.750 ************************************ 00:09:33.750 START TEST unittest_iscsi 00:09:33.750 ************************************ 00:09:33.750 07:19:37 unittest.unittest_iscsi -- common/autotest_common.sh@1129 -- # unittest_iscsi 00:09:33.750 07:19:37 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:09:33.750 00:09:33.750 00:09:33.750 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.750 http://cunit.sourceforge.net/ 00:09:33.750 00:09:33.750 00:09:33.750 Suite: conn_suite 00:09:33.750 Test: 
read_task_split_in_order_case ...passed 00:09:33.750 Test: read_task_split_reverse_order_case ...passed 00:09:33.750 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:09:33.750 Test: process_non_read_task_completion_test ...passed 00:09:33.750 Test: free_tasks_on_connection ...passed 00:09:33.750 Test: free_tasks_with_queued_datain ...passed 00:09:33.750 Test: abort_queued_datain_task_test ...passed 00:09:33.750 Test: abort_queued_datain_tasks_test ...passed 00:09:33.750 00:09:33.750 Run Summary: Type Total Ran Passed Failed Inactive 00:09:33.750 suites 1 1 n/a 0 0 00:09:33.750 tests 8 8 8 0 0 00:09:33.750 asserts 230 230 230 0 n/a 00:09:33.750 00:09:33.750 Elapsed time = 0.001 seconds 00:09:34.009 07:19:37 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:09:34.009 00:09:34.010 00:09:34.010 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.010 http://cunit.sourceforge.net/ 00:09:34.010 00:09:34.010 00:09:34.010 Suite: iscsi_suite 00:09:34.010 Test: param_negotiation_test ...passed 00:09:34.010 Test: list_negotiation_test ...passed 00:09:34.010 Test: parse_valid_test ...passed 00:09:34.010 Test: parse_invalid_test ...[2024-11-20 07:19:37.702927] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:09:34.010 [2024-11-20 07:19:37.703188] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:09:34.010 passed 00:09:34.010 00:09:34.010 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.010 suites 1 1 n/a 0 0 00:09:34.010 tests 4 4 4 0 0 00:09:34.010 asserts 161 161 161 0 n/a 00:09:34.010 00:09:34.010 Elapsed time = 0.006 seconds 00:09:34.010 [2024-11-20 07:19:37.703234] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:09:34.010 [2024-11-20 07:19:37.703294] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:09:34.010 [2024-11-20 07:19:37.703413] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:09:34.010 [2024-11-20 07:19:37.703446] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:09:34.010 [2024-11-20 07:19:37.703526] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:09:34.010 07:19:37 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:09:34.010 00:09:34.010 00:09:34.010 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.010 http://cunit.sourceforge.net/ 00:09:34.010 00:09:34.010 00:09:34.010 Suite: iscsi_target_node_suite 00:09:34.010 Test: add_lun_test_cases ...[2024-11-20 07:19:37.746377] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:09:34.010 passed 00:09:34.010 Test: allow_any_allowed ...passed 00:09:34.010 Test: allow_ipv6_allowed ...passed 00:09:34.010 Test: allow_ipv6_denied ...passed 00:09:34.010 Test: allow_ipv6_invalid ...passed 00:09:34.010 Test: allow_ipv4_allowed ...passed 00:09:34.010 Test: allow_ipv4_denied ...passed 00:09:34.010 Test: allow_ipv4_invalid ...passed 00:09:34.010 Test: node_access_allowed ...[2024-11-20 07:19:37.746606] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 
00:09:34.010 [2024-11-20 07:19:37.746651] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:09:34.010 [2024-11-20 07:19:37.746710] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:09:34.010 [2024-11-20 07:19:37.746748] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:09:34.010 passed 00:09:34.010 Test: node_access_denied_by_empty_netmask ...passed 00:09:34.010 Test: node_access_multi_initiator_groups_cases ...passed 00:09:34.010 Test: allow_iscsi_name_multi_maps_case ...passed 00:09:34.010 Test: chap_param_test_cases ...[2024-11-20 07:19:37.747348] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:09:34.010 [2024-11-20 07:19:37.747399] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:09:34.010 [2024-11-20 07:19:37.747436] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:09:34.010 [2024-11-20 07:19:37.747470] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:09:34.010 passed 00:09:34.010 00:09:34.010 [2024-11-20 07:19:37.747499] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:09:34.010 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.010 suites 1 1 n/a 0 0 00:09:34.010 tests 13 13 13 0 0 00:09:34.010 asserts 50 50 50 0 n/a 00:09:34.010 00:09:34.010 Elapsed time = 0.001 seconds 00:09:34.010 07:19:37 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:09:34.010 00:09:34.010 00:09:34.010 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.010 http://cunit.sourceforge.net/ 00:09:34.010 00:09:34.010 00:09:34.010 Suite: iscsi_suite 00:09:34.010 Test: op_login_check_target_test ...passed 00:09:34.010 Test: op_login_session_normal_test ...[2024-11-20 07:19:37.780110] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:09:34.010 [2024-11-20 07:19:37.780399] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:34.010 [2024-11-20 07:19:37.780440] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:34.010 [2024-11-20 07:19:37.780469] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:34.010 [2024-11-20 07:19:37.780519] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:09:34.010 [2024-11-20 07:19:37.780556] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:09:34.010 [2024-11-20 07:19:37.780608] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:09:34.010 [2024-11-20 07:19:37.780641] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: 
*ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:09:34.010 passed 00:09:34.010 Test: maxburstlength_test ...[2024-11-20 07:19:37.780947] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:09:34.010 [2024-11-20 07:19:37.780988] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:09:34.010 passed 00:09:34.010 Test: underflow_for_read_transfer_test ...passed 00:09:34.010 Test: underflow_for_zero_read_transfer_test ...passed 00:09:34.010 Test: underflow_for_request_sense_test ...passed 00:09:34.010 Test: underflow_for_check_condition_test ...passed 00:09:34.010 Test: add_transfer_task_test ...passed 00:09:34.010 Test: get_transfer_task_test ...passed 00:09:34.010 Test: del_transfer_task_test ...passed 00:09:34.010 Test: clear_all_transfer_tasks_test ...passed 00:09:34.010 Test: build_iovs_test ...passed 00:09:34.010 Test: build_iovs_with_md_test ...passed 00:09:34.010 Test: pdu_hdr_op_login_test ...[2024-11-20 07:19:37.782800] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:09:34.011 [2024-11-20 07:19:37.782943] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:09:34.011 [2024-11-20 07:19:37.783048] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:09:34.011 passed 00:09:34.011 Test: pdu_hdr_op_text_test ...[2024-11-20 07:19:37.783144] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2258:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:09:34.011 [2024-11-20 07:19:37.783195] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:09:34.011 [2024-11-20 07:19:37.783222] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2303:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:09:34.011 passed 00:09:34.011 Test: pdu_hdr_op_logout_test ...[2024-11-20 07:19:37.783285] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2533:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:09:34.011 passed 00:09:34.011 Test: pdu_hdr_op_scsi_test ...[2024-11-20 07:19:37.783371] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:09:34.011 [2024-11-20 07:19:37.783401] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:09:34.011 [2024-11-20 07:19:37.783419] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:09:34.011 [2024-11-20 07:19:37.783489] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3415:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:09:34.011 [2024-11-20 07:19:37.783535] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3422:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:09:34.011 [2024-11-20 07:19:37.783663] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:09:34.011 passed 00:09:34.011 Test: pdu_hdr_op_task_mgmt_test ...[2024-11-20 07:19:37.783765] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:09:34.011 [2024-11-20 07:19:37.783829] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:09:34.011 passed 00:09:34.011 Test: pdu_hdr_op_nopout_test ...[2024-11-20 07:19:37.784007] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:09:34.011 [2024-11-20 07:19:37.784070] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:09:34.011 [2024-11-20 07:19:37.784098] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:09:34.011 passed 00:09:34.011 Test: pdu_hdr_op_data_test ...[2024-11-20 07:19:37.784122] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:09:34.011 [2024-11-20 07:19:37.784176] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:09:34.011 [2024-11-20 07:19:37.784217] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:09:34.011 [2024-11-20 07:19:37.784261] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:09:34.011 [2024-11-20 07:19:37.784282] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:09:34.011 [2024-11-20 07:19:37.784322] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:09:34.011 [2024-11-20 07:19:37.784365] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:09:34.011 [2024-11-20 07:19:37.784395] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4261:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:09:34.011 passed 00:09:34.011 Test: empty_text_with_cbit_test ...passed 00:09:34.011 Test: pdu_payload_read_test ...[2024-11-20 
07:19:37.786059] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4649:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:09:34.011 passed 00:09:34.011 Test: data_out_pdu_sequence_test ...passed 00:09:34.011 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:09:34.011 00:09:34.011 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.011 suites 1 1 n/a 0 0 00:09:34.011 tests 24 24 24 0 0 00:09:34.011 asserts 150253 150253 150253 0 n/a 00:09:34.011 00:09:34.011 Elapsed time = 0.014 seconds 00:09:34.011 07:19:37 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:09:34.011 00:09:34.011 00:09:34.011 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.011 http://cunit.sourceforge.net/ 00:09:34.011 00:09:34.011 00:09:34.011 Suite: init_grp_suite 00:09:34.011 Test: create_initiator_group_success_case ...passed 00:09:34.011 Test: find_initiator_group_success_case ...passed 00:09:34.011 Test: register_initiator_group_twice_case ...passed 00:09:34.011 Test: add_initiator_name_success_case ...passed 00:09:34.011 Test: add_initiator_name_fail_case ...[2024-11-20 07:19:37.835558] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:09:34.011 passed 00:09:34.011 Test: delete_all_initiator_names_success_case ...passed 00:09:34.011 Test: add_netmask_success_case ...passed 00:09:34.011 Test: add_netmask_fail_case ...[2024-11-20 07:19:37.835942] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:09:34.011 passed 00:09:34.011 Test: delete_all_netmasks_success_case ...passed 00:09:34.011 Test: initiator_name_overwrite_all_to_any_case ...passed 00:09:34.011 Test: netmask_overwrite_all_to_any_case ...passed 00:09:34.011 Test: add_delete_initiator_names_case ...passed 00:09:34.011 Test: add_duplicated_initiator_names_case ...passed 00:09:34.011 Test: delete_nonexisting_initiator_names_case ...passed 00:09:34.011 Test: add_delete_netmasks_case ...passed 00:09:34.011 Test: add_duplicated_netmasks_case ...passed 00:09:34.011 Test: delete_nonexisting_netmasks_case ...passed 00:09:34.011 00:09:34.011 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.011 suites 1 1 n/a 0 0 00:09:34.011 tests 17 17 17 0 0 00:09:34.011 asserts 108 108 108 0 n/a 00:09:34.011 00:09:34.011 Elapsed time = 0.001 seconds 00:09:34.011 07:19:37 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:09:34.011 00:09:34.011 00:09:34.011 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.011 http://cunit.sourceforge.net/ 00:09:34.011 00:09:34.011 00:09:34.011 Suite: portal_grp_suite 00:09:34.011 Test: portal_create_ipv4_normal_case ...passed 00:09:34.011 Test: portal_create_ipv6_normal_case ...passed 00:09:34.011 Test: portal_create_ipv4_wildcard_case ...passed 00:09:34.011 Test: portal_create_ipv6_wildcard_case ...passed 00:09:34.011 Test: portal_create_twice_case ...[2024-11-20 07:19:37.875947] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:09:34.011 passed 00:09:34.011 Test: portal_grp_register_unregister_case ...passed 00:09:34.011 Test: portal_grp_register_twice_case ...passed 00:09:34.011 Test: portal_grp_add_delete_case ...passed 00:09:34.011 Test: portal_grp_add_delete_twice_case 
...passed 00:09:34.011 00:09:34.011 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.011 suites 1 1 n/a 0 0 00:09:34.011 tests 9 9 9 0 0 00:09:34.011 asserts 44 44 44 0 n/a 00:09:34.011 00:09:34.011 Elapsed time = 0.005 seconds 00:09:34.011 00:09:34.011 real 0m0.284s 00:09:34.011 user 0m0.138s 00:09:34.011 sys 0m0.148s 00:09:34.011 07:19:37 unittest.unittest_iscsi -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.011 07:19:37 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:09:34.011 ************************************ 00:09:34.011 END TEST unittest_iscsi 00:09:34.012 ************************************ 00:09:34.272 07:19:37 unittest -- unit/unittest.sh@227 -- # run_test unittest_json unittest_json 00:09:34.272 07:19:37 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.272 07:19:37 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.272 07:19:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:34.272 ************************************ 00:09:34.272 START TEST unittest_json 00:09:34.272 ************************************ 00:09:34.272 07:19:37 unittest.unittest_json -- common/autotest_common.sh@1129 -- # unittest_json 00:09:34.272 07:19:37 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:09:34.272 00:09:34.272 00:09:34.272 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.272 http://cunit.sourceforge.net/ 00:09:34.272 00:09:34.272 00:09:34.272 Suite: json 00:09:34.272 Test: test_parse_literal ...passed 00:09:34.272 Test: test_parse_string_simple ...passed 00:09:34.272 Test: test_parse_string_control_chars ...passed 00:09:34.272 Test: test_parse_string_utf8 ...passed 00:09:34.272 Test: test_parse_string_escapes_twochar ...passed 00:09:34.272 Test: test_parse_string_escapes_unicode ...passed 00:09:34.272 Test: test_parse_number ...passed 00:09:34.272 Test: test_parse_array ...passed 00:09:34.272 Test: test_parse_object ...passed 00:09:34.272 Test: test_parse_nesting ...passed 00:09:34.272 Test: test_parse_comment ...passed 00:09:34.272 00:09:34.272 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.272 suites 1 1 n/a 0 0 00:09:34.272 tests 11 11 11 0 0 00:09:34.272 asserts 1516 1516 1516 0 n/a 00:09:34.272 00:09:34.272 Elapsed time = 0.002 seconds 00:09:34.272 07:19:37 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:09:34.272 00:09:34.272 00:09:34.272 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.272 http://cunit.sourceforge.net/ 00:09:34.272 00:09:34.272 00:09:34.272 Suite: json 00:09:34.272 Test: test_strequal ...passed 00:09:34.272 Test: test_num_to_uint16 ...passed 00:09:34.272 Test: test_num_to_int32 ...passed 00:09:34.272 Test: test_num_to_uint64 ...passed 00:09:34.272 Test: test_decode_object ...passed 00:09:34.272 Test: test_decode_array ...passed 00:09:34.272 Test: test_decode_bool ...passed 00:09:34.272 Test: test_decode_uint16 ...passed 00:09:34.272 Test: test_decode_int32 ...passed 00:09:34.272 Test: test_decode_uint32 ...passed 00:09:34.272 Test: test_decode_uint64 ...passed 00:09:34.272 Test: test_decode_string ...passed 00:09:34.272 Test: test_decode_uuid ...passed 00:09:34.272 Test: test_find ...passed 00:09:34.272 Test: test_find_array ...passed 00:09:34.272 Test: test_iterating ...passed 00:09:34.272 Test: test_free_object ...passed 00:09:34.272 00:09:34.272 Run Summary: Type 
Total Ran Passed Failed Inactive 00:09:34.272 suites 1 1 n/a 0 0 00:09:34.272 tests 17 17 17 0 0 00:09:34.272 asserts 236 236 236 0 n/a 00:09:34.272 00:09:34.272 Elapsed time = 0.001 seconds 00:09:34.272 07:19:38 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:09:34.272 00:09:34.272 00:09:34.272 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.272 http://cunit.sourceforge.net/ 00:09:34.272 00:09:34.272 00:09:34.272 Suite: json 00:09:34.272 Test: test_write_literal ...passed 00:09:34.272 Test: test_write_string_simple ...passed 00:09:34.272 Test: test_write_string_escapes ...passed 00:09:34.272 Test: test_write_string_utf16le ...passed 00:09:34.272 Test: test_write_number_int32 ...passed 00:09:34.272 Test: test_write_number_uint32 ...passed 00:09:34.272 Test: test_write_number_uint128 ...passed 00:09:34.272 Test: test_write_string_number_uint128 ...passed 00:09:34.272 Test: test_write_number_int64 ...passed 00:09:34.272 Test: test_write_number_uint64 ...passed 00:09:34.272 Test: test_write_number_double ...passed 00:09:34.272 Test: test_write_uuid ...passed 00:09:34.272 Test: test_write_array ...passed 00:09:34.272 Test: test_write_object ...passed 00:09:34.272 Test: test_write_nesting ...passed 00:09:34.272 Test: test_write_val ...passed 00:09:34.272 00:09:34.272 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.272 suites 1 1 n/a 0 0 00:09:34.272 tests 16 16 16 0 0 00:09:34.272 asserts 918 918 918 0 n/a 00:09:34.272 00:09:34.272 Elapsed time = 0.005 seconds 00:09:34.272 07:19:38 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:09:34.272 00:09:34.272 00:09:34.272 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.272 http://cunit.sourceforge.net/ 00:09:34.272 00:09:34.272 00:09:34.272 Suite: jsonrpc 00:09:34.272 Test: test_parse_request ...passed 00:09:34.272 Test: test_parse_request_streaming ...passed 00:09:34.272 00:09:34.272 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.272 suites 1 1 n/a 0 0 00:09:34.272 tests 2 2 2 0 0 00:09:34.272 asserts 289 289 289 0 n/a 00:09:34.272 00:09:34.272 Elapsed time = 0.005 seconds 00:09:34.272 00:09:34.272 real 0m0.132s 00:09:34.272 user 0m0.061s 00:09:34.272 sys 0m0.071s 00:09:34.272 ************************************ 00:09:34.272 END TEST unittest_json 00:09:34.272 ************************************ 00:09:34.272 07:19:38 unittest.unittest_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.272 07:19:38 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:09:34.272 07:19:38 unittest -- unit/unittest.sh@228 -- # run_test unittest_rpc unittest_rpc 00:09:34.272 07:19:38 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.272 07:19:38 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.272 07:19:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:34.272 ************************************ 00:09:34.272 START TEST unittest_rpc 00:09:34.272 ************************************ 00:09:34.272 07:19:38 unittest.unittest_rpc -- common/autotest_common.sh@1129 -- # unittest_rpc 00:09:34.272 07:19:38 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:09:34.272 00:09:34.272 00:09:34.272 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.272 http://cunit.sourceforge.net/ 00:09:34.272 
00:09:34.272 00:09:34.272 Suite: rpc 00:09:34.272 Test: test_jsonrpc_handler ...passed 00:09:34.272 Test: test_spdk_rpc_is_method_allowed ...passed 00:09:34.272 Test: test_rpc_get_methods ...passed 00:09:34.272 Test: test_rpc_spdk_get_version ...passed 00:09:34.272 Test: test_spdk_rpc_listen_close ...[2024-11-20 07:19:38.186480] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:09:34.272 passed 00:09:34.272 Test: test_rpc_run_multiple_servers ...passed 00:09:34.272 00:09:34.272 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.272 suites 1 1 n/a 0 0 00:09:34.272 tests 6 6 6 0 0 00:09:34.272 asserts 23 23 23 0 n/a 00:09:34.272 00:09:34.272 Elapsed time = 0.001 seconds 00:09:34.533 00:09:34.533 real 0m0.052s 00:09:34.533 user 0m0.032s 00:09:34.533 sys 0m0.021s 00:09:34.533 07:19:38 unittest.unittest_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.533 07:19:38 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.533 ************************************ 00:09:34.533 END TEST unittest_rpc 00:09:34.533 ************************************ 00:09:34.533 07:19:38 unittest -- unit/unittest.sh@229 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:09:34.533 07:19:38 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.533 07:19:38 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.533 07:19:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:34.533 ************************************ 00:09:34.533 START TEST unittest_notify 00:09:34.533 ************************************ 00:09:34.533 07:19:38 unittest.unittest_notify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:09:34.533 00:09:34.533 00:09:34.533 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.533 http://cunit.sourceforge.net/ 00:09:34.533 00:09:34.533 00:09:34.533 Suite: app_suite 00:09:34.533 Test: notify ...passed 00:09:34.533 00:09:34.533 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.533 suites 1 1 n/a 0 0 00:09:34.533 tests 1 1 1 0 0 00:09:34.533 asserts 13 13 13 0 n/a 00:09:34.533 00:09:34.533 Elapsed time = 0.000 seconds 00:09:34.533 00:09:34.533 real 0m0.046s 00:09:34.533 user 0m0.023s 00:09:34.533 sys 0m0.023s 00:09:34.533 07:19:38 unittest.unittest_notify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.533 07:19:38 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:09:34.533 ************************************ 00:09:34.533 END TEST unittest_notify 00:09:34.533 ************************************ 00:09:34.533 07:19:38 unittest -- unit/unittest.sh@230 -- # run_test unittest_nvme unittest_nvme 00:09:34.533 07:19:38 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.533 07:19:38 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.533 07:19:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:34.533 ************************************ 00:09:34.533 START TEST unittest_nvme 00:09:34.533 ************************************ 00:09:34.533 07:19:38 unittest.unittest_nvme -- common/autotest_common.sh@1129 -- # unittest_nvme 00:09:34.533 07:19:38 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:09:34.533 00:09:34.533 00:09:34.533 CUnit - A unit testing framework for C - Version 2.1-3 
00:09:34.533 http://cunit.sourceforge.net/ 00:09:34.533 00:09:34.533 00:09:34.533 Suite: nvme 00:09:34.533 Test: test_opc_data_transfer ...passed 00:09:34.533 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:09:34.533 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:09:34.533 Test: test_trid_parse_and_compare ...[2024-11-20 07:19:38.418868] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1225:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:09:34.533 [2024-11-20 07:19:38.419182] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1282:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:34.533 [2024-11-20 07:19:38.419249] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1237:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:09:34.533 [2024-11-20 07:19:38.419297] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1282:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:34.533 [2024-11-20 07:19:38.419351] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1248:parse_next_key: *ERROR*: Key without value 00:09:34.533 [2024-11-20 07:19:38.419405] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1282:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:34.533 passed 00:09:34.533 Test: test_trid_trtype_str ...passed 00:09:34.533 Test: test_trid_adrfam_str ...passed 00:09:34.533 Test: test_nvme_ctrlr_probe ...[2024-11-20 07:19:38.419828] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 662:nvme_ctrlr_probe: *ERROR*: NVMe controller for SSD: is being destructed 00:09:34.533 [2024-11-20 07:19:38.419899] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:09:34.533 passed 00:09:34.533 Test: test_spdk_nvme_probe_ext ...[2024-11-20 07:19:38.419996] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 599:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:34.533 [2024-11-20 07:19:38.420040] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:09:34.533 [2024-11-20 07:19:38.420212] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 822:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:09:34.533 [2024-11-20 07:19:38.420283] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:09:34.533 passed 00:09:34.533 Test: test_spdk_nvme_connect ...[2024-11-20 07:19:38.420421] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1036:spdk_nvme_connect: *ERROR*: No transport ID specified 00:09:34.533 passed 00:09:34.533 Test: test_nvme_ctrlr_probe_internal ...[2024-11-20 07:19:38.421050] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 599:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:34.533 [2024-11-20 07:19:38.421278] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:09:34.533 [2024-11-20 07:19:38.421342] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:09:34.533 passed 00:09:34.533 Test: test_nvme_init_controllers ...[2024-11-20 07:19:38.421476] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:09:34.533 passed 00:09:34.533 Test: test_nvme_driver_init ...[2024-11-20 07:19:38.421640] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 576:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:09:34.533 
[2024-11-20 07:19:38.421719] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 599:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:34.793 [2024-11-20 07:19:38.530867] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 594:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:09:34.793 [2024-11-20 07:19:38.531106] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 616:nvme_driver_init: *ERROR*: failed to initialize mpassed 00:09:34.793 Test: test_spdk_nvme_detach ...utex 00:09:34.793 passed 00:09:34.793 Test: test_nvme_completion_poll_cb ...passed 00:09:34.793 Test: test_nvme_user_copy_cmd_complete ...passed 00:09:34.793 Test: test_nvme_allocate_request_null ...passed 00:09:34.793 Test: test_nvme_allocate_request ...passed 00:09:34.793 Test: test_nvme_free_request ...passed 00:09:34.793 Test: test_nvme_allocate_request_user_copy ...passed 00:09:34.793 Test: test_nvme_robust_mutex_init_shared ...passed 00:09:34.793 Test: test_nvme_request_check_timeout ...passed 00:09:34.793 Test: test_nvme_wait_for_completion ...passed 00:09:34.793 Test: test_spdk_nvme_parse_func ...passed 00:09:34.793 Test: test_spdk_nvme_detach_async ...passed 00:09:34.793 Test: test_nvme_parse_addr ...[2024-11-20 07:19:38.533208] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1682:nvme_parse_addr: *ERROR*: getaddrinfo failed: Name or service not known (-2) 00:09:34.793 passed 00:09:34.793 00:09:34.793 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.793 suites 1 1 n/a 0 0 00:09:34.793 tests 25 25 25 0 0 00:09:34.793 asserts 331 331 331 0 n/a 00:09:34.793 00:09:34.793 Elapsed time = 0.008 seconds 00:09:34.793 07:19:38 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:09:34.793 00:09:34.793 00:09:34.793 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.793 http://cunit.sourceforge.net/ 00:09:34.793 00:09:34.793 00:09:34.793 Suite: nvme_ctrlr 00:09:34.793 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-11-20 07:19:38.588126] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.793 passed 00:09:34.793 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-11-20 07:19:38.589910] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.793 passed 00:09:34.793 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-11-20 07:19:38.591207] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.793 passed 00:09:34.793 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-11-20 07:19:38.592462] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.793 passed 00:09:34.793 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-11-20 07:19:38.593732] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.793 [2024-11-20 07:19:38.594877] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22[2024-11-20 07:19:38.596028] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22[2024-11-20 07:19:38.597166] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22passed 00:09:34.793 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-11-20 07:19:38.599518] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.793 [2024-11-20 07:19:38.601719] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22[2024-11-20 07:19:38.602869] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22passed 00:09:34.793 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-11-20 07:19:38.605241] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.793 [2024-11-20 07:19:38.606444] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22[2024-11-20 07:19:38.608708] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22passed 00:09:34.793 Test: test_nvme_ctrlr_init_delay ...[2024-11-20 07:19:38.611208] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.793 passed 00:09:34.793 Test: test_alloc_io_qpair_rr_1 ...[2024-11-20 07:19:38.612561] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.793 [2024-11-20 07:19:38.612841] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [, 0] No free I/O queue IDs 00:09:34.793 [2024-11-20 07:19:38.612980] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 381:nvme_ctrlr_create_io_qpair: *ERROR*: [, 0] invalid queue priority for default round robin arbitration method 00:09:34.793 [2024-11-20 07:19:38.613057] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 381:nvme_ctrlr_create_io_qpair: *ERROR*: [, 0] invalid queue priority for default round robin arbitration method 00:09:34.793 [2024-11-20 07:19:38.613081] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 381:nvme_ctrlr_create_io_qpair: *ERROR*: [, 0] invalid queue priority for default round robin arbitration method 00:09:34.793 passed 00:09:34.793 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:09:34.793 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:09:34.793 Test: test_alloc_io_qpair_wrr_1 ...[2024-11-20 07:19:38.613249] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.793 passed 00:09:34.793 Test: test_alloc_io_qpair_wrr_2 ...[2024-11-20 07:19:38.613527] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.793 [2024-11-20 07:19:38.613739] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [, 0] No free I/O queue IDs 00:09:34.793 passed 00:09:34.793 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-11-20 07:19:38.614074] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5051:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] spdk_nvme_ctrlr_update_firmware invalid size! 00:09:34.793 [2024-11-20 07:19:38.614204] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5088:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] spdk_nvme_ctrlr_fw_image_download failed! 00:09:34.793 [2024-11-20 07:19:38.614272] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5128:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] nvme_ctrlr_cmd_fw_commit failed! 00:09:34.793 [2024-11-20 07:19:38.614390] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5088:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] spdk_nvme_ctrlr_fw_image_download failed! 00:09:34.793 passed 00:09:34.793 Test: test_nvme_ctrlr_fail ...passed 00:09:34.793 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...[2024-11-20 07:19:38.614503] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [, 0] in failed state. 00:09:34.793 passed 00:09:34.793 Test: test_nvme_ctrlr_set_supported_features ...passed 00:09:34.793 Test: test_nvme_ctrlr_set_host_feature ...[2024-11-20 07:19:38.614653] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.793 passed 00:09:34.793 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:09:34.793 Test: test_nvme_ctrlr_test_active_ns ...[2024-11-20 07:19:38.616307] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:35.053 passed 00:09:35.053 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:09:35.054 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:09:35.054 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:09:35.054 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-11-20 07:19:38.855047] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:35.054 passed 00:09:35.054 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-11-20 07:19:38.861900] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:35.054 passed 00:09:35.054 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-11-20 07:19:38.863059] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:35.054 [2024-11-20 07:19:38.863107] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3039:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [, 0] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:09:35.054 passed 00:09:35.054 Test: test_alloc_io_qpair_fail ...[2024-11-20 07:19:38.864217] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:35.054 [2024-11-20 07:19:38.864281] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 505:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [, 0] 
nvme_transport_ctrlr_connect_io_qpair() failed 00:09:35.054 passed 00:09:35.054 Test: test_nvme_ctrlr_add_remove_process ...passed 00:09:35.054 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:09:35.054 Test: test_nvme_ctrlr_set_state ...[2024-11-20 07:19:38.864453] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1554:_nvme_ctrlr_set_state: *ERROR*: [, 0] Specified timeout would cause integer overflow. Defaulting to no timeout. 00:09:35.054 passed 00:09:35.054 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-11-20 07:19:38.864498] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:35.054 passed 00:09:35.054 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-11-20 07:19:38.885744] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:35.054 passed 00:09:35.054 Test: test_nvme_ctrlr_ns_mgmt ...[2024-11-20 07:19:38.923300] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:35.054 passed 00:09:35.054 Test: test_nvme_ctrlr_reset ...[2024-11-20 07:19:38.924896] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:35.054 passed 00:09:35.054 Test: test_nvme_ctrlr_aer_callback ...[2024-11-20 07:19:38.925207] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:35.054 passed 00:09:35.054 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-11-20 07:19:38.926546] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:35.054 passed 00:09:35.054 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:09:35.054 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:09:35.054 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-11-20 07:19:38.928157] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:35.054 passed 00:09:35.054 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:09:35.054 Test: test_nvme_ctrlr_ana_resize ...[2024-11-20 07:19:38.929483] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:35.054 passed 00:09:35.054 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:09:35.054 Test: test_nvme_transport_ctrlr_ready ...passed 00:09:35.054 Test: test_nvme_ctrlr_disable ...[2024-11-20 07:19:38.930979] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4194:nvme_ctrlr_process_init: *ERROR*: [, 0] Transport controller ready step failed: rc -1 00:09:35.054 [2024-11-20 07:19:38.931017] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4246:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:09:35.054 [2024-11-20 07:19:38.931053] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less 
than minimum defined by NVMe spec, use min value 00:09:35.054 passed 00:09:35.054 Test: test_nvme_numa_id ...passed 00:09:35.054 00:09:35.054 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.054 suites 1 1 n/a 0 0 00:09:35.054 tests 45 45 45 0 0 00:09:35.054 asserts 10448 10448 10448 0 n/a 00:09:35.054 00:09:35.054 Elapsed time = 0.303 seconds 00:09:35.054 07:19:38 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:09:35.314 00:09:35.314 00:09:35.314 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.314 http://cunit.sourceforge.net/ 00:09:35.314 00:09:35.314 00:09:35.314 Suite: nvme_ctrlr_cmd 00:09:35.314 Test: test_get_log_pages ...passed 00:09:35.314 Test: test_set_feature_cmd ...passed 00:09:35.314 Test: test_set_feature_ns_cmd ...passed 00:09:35.314 Test: test_get_feature_cmd ...passed 00:09:35.314 Test: test_get_feature_ns_cmd ...passed 00:09:35.314 Test: test_abort_cmd ...passed 00:09:35.314 Test: test_set_host_id_cmds ...[2024-11-20 07:19:38.986497] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:09:35.314 passed 00:09:35.314 Test: test_io_cmd_raw_no_payload_build ...passed 00:09:35.314 Test: test_io_raw_cmd ...passed 00:09:35.314 Test: test_io_raw_cmd_with_md ...passed 00:09:35.314 Test: test_namespace_attach ...passed 00:09:35.314 Test: test_namespace_detach ...passed 00:09:35.314 Test: test_namespace_create ...passed 00:09:35.314 Test: test_namespace_delete ...passed 00:09:35.314 Test: test_doorbell_buffer_config ...passed 00:09:35.314 Test: test_format_nvme ...passed 00:09:35.314 Test: test_fw_commit ...passed 00:09:35.314 Test: test_fw_image_download ...passed 00:09:35.314 Test: test_sanitize ...passed 00:09:35.314 Test: test_directive ...passed 00:09:35.314 Test: test_nvme_request_add_abort ...passed 00:09:35.314 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:09:35.314 Test: test_nvme_ctrlr_cmd_identify ...passed 00:09:35.314 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:09:35.314 00:09:35.314 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.314 suites 1 1 n/a 0 0 00:09:35.314 tests 24 24 24 0 0 00:09:35.314 asserts 198 198 198 0 n/a 00:09:35.314 00:09:35.314 Elapsed time = 0.001 seconds 00:09:35.314 07:19:39 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:09:35.314 00:09:35.314 00:09:35.314 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.314 http://cunit.sourceforge.net/ 00:09:35.314 00:09:35.314 00:09:35.314 Suite: nvme_ctrlr_cmd 00:09:35.314 Test: test_geometry_cmd ...passed 00:09:35.314 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:09:35.314 00:09:35.315 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.315 suites 1 1 n/a 0 0 00:09:35.315 tests 2 2 2 0 0 00:09:35.315 asserts 7 7 7 0 n/a 00:09:35.315 00:09:35.315 Elapsed time = 0.000 seconds 00:09:35.315 07:19:39 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:09:35.315 00:09:35.315 00:09:35.315 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.315 http://cunit.sourceforge.net/ 00:09:35.315 00:09:35.315 00:09:35.315 Suite: nvme 00:09:35.315 Test: test_nvme_ns_construct ...passed 00:09:35.315 Test: test_nvme_ns_uuid ...passed 00:09:35.315 Test: test_nvme_ns_csi ...passed 
00:09:35.315 Test: test_nvme_ns_data ...passed 00:09:35.315 Test: test_nvme_ns_set_identify_data ...passed 00:09:35.315 Test: test_spdk_nvme_ns_get_values ...passed 00:09:35.315 Test: test_spdk_nvme_ns_is_active ...passed 00:09:35.315 Test: spdk_nvme_ns_supports ...passed 00:09:35.315 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:09:35.315 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:09:35.315 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:09:35.315 Test: test_nvme_ns_find_id_desc ...passed 00:09:35.315 00:09:35.315 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.315 suites 1 1 n/a 0 0 00:09:35.315 tests 12 12 12 0 0 00:09:35.315 asserts 95 95 95 0 n/a 00:09:35.315 00:09:35.315 Elapsed time = 0.001 seconds 00:09:35.315 07:19:39 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:09:35.315 00:09:35.315 00:09:35.315 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.315 http://cunit.sourceforge.net/ 00:09:35.315 00:09:35.315 00:09:35.315 Suite: nvme_ns_cmd 00:09:35.315 Test: split_test ...passed 00:09:35.315 Test: split_test2 ...passed 00:09:35.315 Test: split_test3 ...passed 00:09:35.315 Test: split_test4 ...passed 00:09:35.315 Test: test_nvme_ns_cmd_flush ...passed 00:09:35.315 Test: test_nvme_ns_cmd_dataset_management ...passed 00:09:35.315 Test: test_nvme_ns_cmd_copy ...passed 00:09:35.315 Test: test_io_flags ...passed 00:09:35.315 Test: test_nvme_ns_cmd_write_zeroes ...[2024-11-20 07:19:39.113516] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:09:35.315 passed 00:09:35.315 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:09:35.315 Test: test_nvme_ns_cmd_reservation_register ...passed 00:09:35.315 Test: test_nvme_ns_cmd_reservation_release ...passed 00:09:35.315 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:09:35.315 Test: test_nvme_ns_cmd_reservation_report ...passed 00:09:35.315 Test: test_cmd_child_request ...passed 00:09:35.315 Test: test_nvme_ns_cmd_readv ...passed 00:09:35.315 Test: test_nvme_ns_cmd_readv_sgl ...[2024-11-20 07:19:39.114443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 390:_nvme_ns_cmd_split_request_sgl: *ERROR*: Unable to send I/O. 
Would require more than the supported number of SGL Elements.passed 00:09:35.315 Test: test_nvme_ns_cmd_read_with_md ...passed 00:09:35.315 Test: test_nvme_ns_cmd_writev ...[2024-11-20 07:19:39.114759] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:09:35.315 passed 00:09:35.315 Test: test_nvme_ns_cmd_write_with_md ...passed 00:09:35.315 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:09:35.315 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:09:35.315 Test: test_nvme_ns_cmd_comparev ...passed 00:09:35.315 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:09:35.315 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:09:35.315 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:09:35.315 Test: test_nvme_ns_cmd_setup_request ...passed 00:09:35.315 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:09:35.315 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-11-20 07:19:39.116327] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:09:35.315 passed 00:09:35.315 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:09:35.315 Test: test_nvme_ns_cmd_verify ...passed 00:09:35.315 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:09:35.315 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:09:35.315 00:09:35.315 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.315 suites 1 1 n/a 0 0 00:09:35.315 tests 33 33 33 0 0 00:09:35.315 asserts 569 569 569 0 n/a 00:09:35.315 00:09:35.315 Elapsed time = 0.004 seconds 00:09:35.315 [2024-11-20 07:19:39.116413] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:09:35.315 07:19:39 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:09:35.315 00:09:35.315 00:09:35.315 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.315 http://cunit.sourceforge.net/ 00:09:35.315 00:09:35.315 00:09:35.315 Suite: nvme_ns_cmd 00:09:35.315 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:09:35.315 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:09:35.315 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:09:35.315 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:09:35.315 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:09:35.315 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:09:35.315 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:09:35.315 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:09:35.315 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:09:35.315 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:09:35.315 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:09:35.315 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:09:35.315 00:09:35.315 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.315 suites 1 1 n/a 0 0 00:09:35.315 tests 12 12 12 0 0 00:09:35.315 asserts 123 123 123 0 n/a 00:09:35.315 00:09:35.315 Elapsed time = 0.002 seconds 00:09:35.315 07:19:39 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:09:35.315 00:09:35.315 00:09:35.315 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.315 http://cunit.sourceforge.net/ 00:09:35.315 
00:09:35.315 00:09:35.315 Suite: nvme_qpair 00:09:35.315 Test: test3 ...passed 00:09:35.315 Test: test_ctrlr_failed ...passed 00:09:35.315 Test: struct_packing ...passed 00:09:35.315 Test: test_nvme_qpair_process_completions ...[2024-11-20 07:19:39.212039] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:35.315 [2024-11-20 07:19:39.212444] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:35.315 [2024-11-20 07:19:39.212563] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:09:35.315 [2024-11-20 07:19:39.212636] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [, 0] CQ transport error -6 (No such device or address) on qpair id 1 00:09:35.315 passed 00:09:35.315 Test: test_nvme_completion_is_retry ...passed 00:09:35.315 Test: test_get_status_string ...passed 00:09:35.315 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:09:35.315 Test: test_nvme_qpair_submit_request ...passed 00:09:35.315 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:09:35.315 Test: test_nvme_qpair_manual_complete_request ...passed 00:09:35.315 Test: test_nvme_qpair_init_deinit ...[2024-11-20 07:19:39.213245] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:35.315 passed 00:09:35.315 Test: test_nvme_get_sgl_print_info ...passed 00:09:35.315 00:09:35.315 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.315 suites 1 1 n/a 0 0 00:09:35.315 tests 12 12 12 0 0 00:09:35.315 asserts 154 154 154 0 n/a 00:09:35.315 00:09:35.315 Elapsed time = 0.002 seconds 00:09:35.315 07:19:39 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:09:35.576 00:09:35.576 00:09:35.576 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.576 http://cunit.sourceforge.net/ 00:09:35.576 00:09:35.576 00:09:35.576 Suite: nvme_pcie 00:09:35.576 Test: test_prp_list_append ...[2024-11-20 07:19:39.263131] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1242:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:09:35.576 [2024-11-20 07:19:39.263418] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1271:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:09:35.576 [2024-11-20 07:19:39.263467] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1261:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:09:35.577 [2024-11-20 07:19:39.263716] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:09:35.577 passed 00:09:35.577 Test: test_nvme_pcie_hotplug_monitor ...[2024-11-20 07:19:39.263827] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:09:35.577 passed 00:09:35.577 Test: test_shadow_doorbell_update ...passed 00:09:35.577 Test: test_build_contig_hw_sgl_request ...passed 00:09:35.577 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:09:35.577 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:09:35.577 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:09:35.577 Test: test_nvme_pcie_qpair_build_contig_request 
...passed 00:09:35.577 Test: test_nvme_pcie_ctrlr_regs_get_set ...[2024-11-20 07:19:39.264128] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1242:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:09:35.577 passed 00:09:35.577 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:09:35.577 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-11-20 07:19:39.264285] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 00:09:35.577 passed 00:09:35.577 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:09:35.577 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-11-20 07:19:39.264356] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:09:35.577 passed 00:09:35.577 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-11-20 07:19:39.264416] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:09:35.577 [2024-11-20 07:19:39.264474] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:09:35.577 passed 00:09:35.577 00:09:35.577 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.577 suites 1 1 n/a 0 0 00:09:35.577 tests 14 14 14 0 0 00:09:35.577 asserts 235 235 235 0 n/a 00:09:35.577 00:09:35.577 Elapsed time = 0.001 seconds 00:09:35.577 07:19:39 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:09:35.577 00:09:35.577 00:09:35.577 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.577 http://cunit.sourceforge.net/ 00:09:35.577 00:09:35.577 00:09:35.577 Suite: nvme_ns_cmd 00:09:35.577 Test: nvme_poll_group_create_test ...passed 00:09:35.577 Test: nvme_poll_group_add_remove_test ...[2024-11-20 07:19:39.308760] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_poll_group.c: 188:spdk_nvme_poll_group_add: *ERROR*: Queue pair without interrupts cannot be added to poll group 00:09:35.577 passed 00:09:35.577 Test: nvme_poll_group_process_completions ...passed 00:09:35.577 Test: nvme_poll_group_destroy_test ...passed 00:09:35.577 Test: nvme_poll_group_get_free_stats ...passed 00:09:35.577 00:09:35.577 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.577 suites 1 1 n/a 0 0 00:09:35.577 tests 5 5 5 0 0 00:09:35.577 asserts 103 103 103 0 n/a 00:09:35.577 00:09:35.577 Elapsed time = 0.001 seconds 00:09:35.577 07:19:39 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:09:35.577 00:09:35.577 00:09:35.577 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.577 http://cunit.sourceforge.net/ 00:09:35.577 00:09:35.577 00:09:35.577 Suite: nvme_quirks 00:09:35.577 Test: test_nvme_quirks_striping ...passed 00:09:35.577 00:09:35.577 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.577 suites 1 1 n/a 0 0 00:09:35.577 tests 1 1 1 0 0 00:09:35.577 asserts 5 5 5 0 n/a 00:09:35.577 00:09:35.577 Elapsed time = 0.000 seconds 00:09:35.577 07:19:39 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:09:35.577 00:09:35.577 00:09:35.577 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.577 http://cunit.sourceforge.net/ 00:09:35.577 00:09:35.577 00:09:35.577 Suite: nvme_tcp 00:09:35.577 Test: 
test_nvme_tcp_pdu_set_data_buf ...passed 00:09:35.577 Test: test_nvme_tcp_build_iovs ...passed 00:09:35.577 Test: test_nvme_tcp_build_sgl_request ...passed 00:09:35.577 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...[2024-11-20 07:19:39.394345] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 790:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7dcf19e0d2d0, and the iovcnt=16, remaining_size=28672 00:09:35.577 passed 00:09:35.577 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:09:35.577 Test: test_nvme_tcp_req_complete_safe ...passed 00:09:35.577 Test: test_nvme_tcp_req_get ...passed 00:09:35.577 Test: test_nvme_tcp_req_init ...passed 00:09:35.577 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:09:35.577 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:09:35.577 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:09:35.577 Test: test_nvme_tcp_alloc_reqs ...[2024-11-20 07:19:39.394760] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dcf19a09020 is same with the state(7) to be set 00:09:35.577 passed 00:09:35.577 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:09:35.577 Test: test_nvme_tcp_pdu_ch_handle ...[2024-11-20 07:19:39.395060] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dcf19d09080 is same with the state(6) to be set 00:09:35.577 [2024-11-20 07:19:39.395110] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1133:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7dcf19c0a760 00:09:35.577 [2024-11-20 07:19:39.395133] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1192:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:09:35.577 [2024-11-20 07:19:39.395160] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dcf19c0a080 is same with the state(6) to be set 00:09:35.577 [2024-11-20 07:19:39.395188] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1143:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:09:35.577 [2024-11-20 07:19:39.395215] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dcf19c0a080 is same with the state(6) to be set 00:09:35.577 [2024-11-20 07:19:39.395240] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:09:35.577 [2024-11-20 07:19:39.395267] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dcf19c0a080 is same with the state(6) to be set 00:09:35.577 [2024-11-20 07:19:39.395296] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dcf19c0a080 is same with the state(6) to be set 00:09:35.577 [2024-11-20 07:19:39.395327] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dcf19c0a080 is same with the state(6) to be set 00:09:35.577 [2024-11-20 07:19:39.395356] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dcf19c0a080 is same with the state(6) to be set 00:09:35.577 [2024-11-20 07:19:39.395386] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x7dcf19c0a080 is same with the state(6) to be set 00:09:35.577 passed 00:09:35.577 Test: test_nvme_tcp_qpair_connect_sock ...[2024-11-20 07:19:39.395414] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dcf19c0a080 is same with the state(6) to be set 00:09:35.578 [2024-11-20 07:19:39.395571] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2233:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:09:35.578 [2024-11-20 07:19:39.395611] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2245:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:09:35.578 passed 00:09:35.578 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:09:35.578 Test: test_nvme_tcp_c2h_payload_handle ...[2024-11-20 07:19:39.395902] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2245:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:09:35.578 passed 00:09:35.578 Test: test_nvme_tcp_icresp_handle ...[2024-11-20 07:19:39.395983] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1300:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7dcf19c0b5c0): PDU Sequence Error 00:09:35.578 [2024-11-20 07:19:39.396028] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1476:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:09:35.578 [2024-11-20 07:19:39.396053] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1483:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:09:35.578 [2024-11-20 07:19:39.396075] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dcf19d0b080 is same with the state(6) to be set 00:09:35.578 [2024-11-20 07:19:39.396101] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1492:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:09:35.578 [2024-11-20 07:19:39.396128] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dcf19d0b080 is same with the state(6) to be set 00:09:35.578 passed 00:09:35.578 Test: test_nvme_tcp_pdu_payload_handle ...[2024-11-20 07:19:39.396150] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dcf19d0b080 is same with the state(0) to be set 00:09:35.578 [2024-11-20 07:19:39.396203] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1300:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7dcf19c0c5c0): PDU Sequence Error 00:09:35.578 passed 00:09:35.578 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-11-20 07:19:39.396273] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1553:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7dcf19d0d210 00:09:35.578 passed 00:09:35.578 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:09:35.578 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-11-20 07:19:39.396409] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 357:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7dcf19e294b0, errno=0, rc=0 00:09:35.578 [2024-11-20 07:19:39.396435] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dcf19e294b0 is same with the state(6) to be set 00:09:35.578 [2024-11-20 07:19:39.396465] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dcf19e294b0 is same with the 
state(6) to be set 00:09:35.578 [2024-11-20 07:19:39.396503] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dcf19e294b0 (0): Success 00:09:35.578 passed 00:09:35.578 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-11-20 07:19:39.396533] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dcf19e294b0 (0): Success 00:09:35.578 passed 00:09:35.578 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:09:35.578 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:09:35.578 Test: test_nvme_tcp_ctrlr_construct ...[2024-11-20 07:19:39.485724] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2436:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:09:35.578 [2024-11-20 07:19:39.485821] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2436:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:09:35.578 [2024-11-20 07:19:39.486070] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2900:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:35.578 [2024-11-20 07:19:39.486099] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2900:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:35.578 [2024-11-20 07:19:39.486288] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2436:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:09:35.578 [2024-11-20 07:19:39.486315] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:35.578 [2024-11-20 07:19:39.486383] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2233:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:09:35.578 [2024-11-20 07:19:39.486426] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:35.578 [2024-11-20 07:19:39.486518] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x515000001980 with addr=192.168.1.78, port=23 00:09:35.578 [2024-11-20 07:19:39.486571] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:35.578 passed 00:09:35.578 Test: test_nvme_tcp_qpair_submit_request ...[2024-11-20 07:19:39.486713] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 790:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x514000000c40, and the iovcnt=1, remaining_size=1024 00:09:35.578 [2024-11-20 07:19:39.486742] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 977:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:09:35.578 passed 00:09:35.578 00:09:35.578 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.578 suites 1 1 n/a 0 0 00:09:35.578 tests 27 27 27 0 0 00:09:35.578 asserts 624 624 624 0 n/a 00:09:35.578 00:09:35.578 Elapsed time = 0.093 seconds 00:09:35.838 07:19:39 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:09:35.838 00:09:35.838 00:09:35.838 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.838 http://cunit.sourceforge.net/ 00:09:35.838 00:09:35.838 00:09:35.838 Suite: nvme_transport 00:09:35.838 Test: test_nvme_get_transport ...passed 00:09:35.838 Test: 
test_nvme_transport_poll_group_connect_qpair ...passed 00:09:35.838 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:09:35.838 Test: test_nvme_transport_poll_group_add_remove ...passed 00:09:35.838 Test: test_ctrlr_get_memory_domains ...passed 00:09:35.838 00:09:35.838 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.838 suites 1 1 n/a 0 0 00:09:35.838 tests 5 5 5 0 0 00:09:35.838 asserts 28 28 28 0 n/a 00:09:35.838 00:09:35.838 Elapsed time = 0.000 seconds 00:09:35.838 07:19:39 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:09:35.838 00:09:35.838 00:09:35.838 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.838 http://cunit.sourceforge.net/ 00:09:35.838 00:09:35.838 00:09:35.838 Suite: nvme_io_msg 00:09:35.838 Test: test_nvme_io_msg_send ...passed 00:09:35.838 Test: test_nvme_io_msg_process ...passed 00:09:35.839 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:09:35.839 00:09:35.839 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.839 suites 1 1 n/a 0 0 00:09:35.839 tests 3 3 3 0 0 00:09:35.839 asserts 56 56 56 0 n/a 00:09:35.839 00:09:35.839 Elapsed time = 0.000 seconds 00:09:35.839 07:19:39 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:09:35.839 00:09:35.839 00:09:35.839 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.839 http://cunit.sourceforge.net/ 00:09:35.839 00:09:35.839 00:09:35.839 Suite: nvme_pcie_common 00:09:35.839 Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed 00:09:35.839 Test: test_nvme_pcie_qpair_construct_destroy ...[2024-11-20 07:19:39.609927] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 112:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:09:35.839 passed 00:09:35.839 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:09:35.839 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-11-20 07:19:39.610914] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 541:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:09:35.839 [2024-11-20 07:19:39.610986] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 494:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 
00:09:35.839 passed 00:09:35.839 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-11-20 07:19:39.611025] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 588:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:09:35.839 passed 00:09:35.839 Test: test_nvme_pcie_poll_group_get_stats ...[2024-11-20 07:19:39.611556] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1851:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:35.839 [2024-11-20 07:19:39.611619] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1851:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:35.839 passed 00:09:35.839 00:09:35.839 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.839 suites 1 1 n/a 0 0 00:09:35.839 tests 6 6 6 0 0 00:09:35.839 asserts 148 148 148 0 n/a 00:09:35.839 00:09:35.839 Elapsed time = 0.002 seconds 00:09:35.839 07:19:39 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:09:35.839 00:09:35.839 00:09:35.839 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.839 http://cunit.sourceforge.net/ 00:09:35.839 00:09:35.839 00:09:35.839 Suite: nvme_fabric 00:09:35.839 Test: test_nvme_fabric_prop_set_cmd ...passed 00:09:35.839 Test: test_nvme_fabric_prop_get_cmd ...passed 00:09:35.839 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:09:35.839 Test: test_nvme_fabric_discover_probe ...passed 00:09:35.839 Test: test_nvme_fabric_qpair_connect ...[2024-11-20 07:19:39.650341] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:09:35.839 passed 00:09:35.839 00:09:35.839 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.839 suites 1 1 n/a 0 0 00:09:35.839 tests 5 5 5 0 0 00:09:35.839 asserts 60 60 60 0 n/a 00:09:35.839 00:09:35.839 Elapsed time = 0.001 seconds 00:09:35.839 07:19:39 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:09:35.839 00:09:35.839 00:09:35.839 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.839 http://cunit.sourceforge.net/ 00:09:35.839 00:09:35.839 00:09:35.839 Suite: nvme_opal 00:09:35.839 Test: test_opal_nvme_security_recv_send_done ...passed 00:09:35.839 Test: test_opal_add_short_atom_header ...passed 00:09:35.839 00:09:35.839 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.839 suites 1 1 n/a 0 0 00:09:35.839 tests 2 2 2 0 0 00:09:35.839 asserts 22 22 22 0 n/a 00:09:35.839 00:09:35.839 Elapsed time = 0.000 seconds 00:09:35.839 [2024-11-20 07:19:39.701112] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 
00:09:35.839 00:09:35.839 real 0m1.330s 00:09:35.839 user 0m0.605s 00:09:35.839 sys 0m0.586s 00:09:35.839 07:19:39 unittest.unittest_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.839 07:19:39 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:35.839 ************************************ 00:09:35.839 END TEST unittest_nvme 00:09:35.839 ************************************ 00:09:36.099 07:19:39 unittest -- unit/unittest.sh@231 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:09:36.099 07:19:39 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.099 07:19:39 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.099 07:19:39 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:36.099 ************************************ 00:09:36.099 START TEST unittest_log 00:09:36.099 ************************************ 00:09:36.099 07:19:39 unittest.unittest_log -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:09:36.099 00:09:36.099 00:09:36.099 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.099 http://cunit.sourceforge.net/ 00:09:36.099 00:09:36.099 00:09:36.099 Suite: log 00:09:36.099 Test: log_test ...[2024-11-20 07:19:39.816563] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:09:36.099 [2024-11-20 07:19:39.816876] log_ut.c: 57:log_test: *DEBUG*: log test 00:09:36.099 log dump test: 00:09:36.099 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:09:36.099 spdk dump test: 00:09:36.099 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:09:36.099 spdk dump test: 00:09:36.099 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:09:36.099 passed 00:09:36.099 Test: deprecation ...00000010 65 20 63 68 61 72 73 e chars 00:09:37.038 passed 00:09:37.038 Test: log_ext_test ...passed 00:09:37.038 00:09:37.038 Run Summary: Type Total Ran Passed Failed Inactive 00:09:37.038 suites 1 1 n/a 0 0 00:09:37.038 tests 3 3 3 0 0 00:09:37.038 asserts 77 77 77 0 n/a 00:09:37.038 00:09:37.038 Elapsed time = 0.001 seconds 00:09:37.038 00:09:37.038 real 0m1.040s 00:09:37.038 user 0m0.019s 00:09:37.038 sys 0m0.022s 00:09:37.038 07:19:40 unittest.unittest_log -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.038 07:19:40 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:09:37.038 ************************************ 00:09:37.038 END TEST unittest_log 00:09:37.038 ************************************ 00:09:37.038 07:19:40 unittest -- unit/unittest.sh@232 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:09:37.038 07:19:40 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.038 07:19:40 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.038 07:19:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:37.038 ************************************ 00:09:37.038 START TEST unittest_lvol 00:09:37.038 ************************************ 00:09:37.038 07:19:40 unittest.unittest_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:09:37.038 00:09:37.038 00:09:37.038 CUnit - A unit testing framework for C - Version 2.1-3 00:09:37.038 http://cunit.sourceforge.net/ 00:09:37.038 00:09:37.038 00:09:37.038 Suite: lvol 00:09:37.038 Test: lvs_init_unload_success ...[2024-11-20 07:19:40.927133] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 
892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:09:37.038 passed 00:09:37.038 Test: lvs_init_destroy_success ...[2024-11-20 07:19:40.927615] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:09:37.038 passed 00:09:37.038 Test: lvs_init_opts_success ...passed 00:09:37.038 Test: lvs_unload_lvs_is_null_fail ...passed 00:09:37.038 Test: lvs_names ...[2024-11-20 07:19:40.927894] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:09:37.038 [2024-11-20 07:19:40.927940] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:09:37.038 [2024-11-20 07:19:40.927988] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:09:37.038 [2024-11-20 07:19:40.928162] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:09:37.038 passed 00:09:37.038 Test: lvol_create_destroy_success ...passed 00:09:37.038 Test: lvol_create_fail ...passed 00:09:37.038 Test: lvol_destroy_fail ...[2024-11-20 07:19:40.928883] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:09:37.038 [2024-11-20 07:19:40.928996] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:09:37.038 passed 00:09:37.038 Test: lvol_close ...[2024-11-20 07:19:40.929331] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:09:37.038 [2024-11-20 07:19:40.929540] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:09:37.039 [2024-11-20 07:19:40.929586] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:09:37.039 passed 00:09:37.039 Test: lvol_resize ...passed 00:09:37.039 Test: lvol_set_read_only ...passed 00:09:37.039 Test: test_lvs_load ...passed 00:09:37.039 Test: lvols_load ...[2024-11-20 07:19:40.930530] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:09:37.039 [2024-11-20 07:19:40.930584] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:09:37.039 [2024-11-20 07:19:40.930832] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:09:37.039 [2024-11-20 07:19:40.930965] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:09:37.039 passed 00:09:37.039 Test: lvol_open ...passed 00:09:37.039 Test: lvol_snapshot ...passed 00:09:37.039 Test: lvol_snapshot_fail ...passed 00:09:37.039 Test: lvol_clone ...[2024-11-20 07:19:40.931593] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:09:37.039 passed 00:09:37.039 Test: lvol_clone_fail ...[2024-11-20 07:19:40.932063] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:09:37.039 passed 00:09:37.039 Test: lvol_iter_clones ...passed 00:09:37.039 Test: lvol_refcnt ...passed 00:09:37.039 Test: lvol_names ...[2024-11-20 07:19:40.932584] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 53570553-e201-43d0-b224-31338c33caf0 because it is still open 00:09:37.039 
[2024-11-20 07:19:40.932757] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null tpassed 00:09:37.039 Test: lvol_create_thin_provisioned ...passed 00:09:37.039 Test: lvol_rename ...erminator. 00:09:37.039 [2024-11-20 07:19:40.932909] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:37.039 [2024-11-20 07:19:40.933087] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:09:37.039 [2024-11-20 07:19:40.933543] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:37.039 [2024-11-20 07:19:40.933625] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:09:37.039 passed 00:09:37.039 Test: lvs_rename ...passed 00:09:37.039 Test: lvol_inflate ...[2024-11-20 07:19:40.933919] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:09:37.039 passed 00:09:37.039 Test: lvol_decouple_parent ...[2024-11-20 07:19:40.934092] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:09:37.039 [2024-11-20 07:19:40.934266] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:09:37.039 passed 00:09:37.039 Test: lvol_get_xattr ...passed 00:09:37.039 Test: lvol_esnap_reload ...passed 00:09:37.039 Test: lvol_esnap_create_bad_args ...passed 00:09:37.039 Test: lvol_esnap_create_delete ...[2024-11-20 07:19:40.934611] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:09:37.039 [2024-11-20 07:19:40.934634] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:09:37.039 [2024-11-20 07:19:40.934660] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:09:37.039 [2024-11-20 07:19:40.934725] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:37.039 [2024-11-20 07:19:40.934797] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:09:37.039 passed 00:09:37.039 Test: lvol_esnap_load_esnaps ...passed 00:09:37.039 Test: lvol_esnap_missing ...[2024-11-20 07:19:40.934995] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:09:37.039 [2024-11-20 07:19:40.935084] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:09:37.039 [2024-11-20 07:19:40.935115] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:09:37.039 passed 00:09:37.039 Test: lvol_esnap_hotplug ... 
00:09:37.039 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:09:37.039 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:09:37.039 lvol_esnap_hotplug scenario 2: PASS - one missing, cb returns -ENOMEM 00:09:37.039 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:09:37.039 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:09:37.039 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:09:37.039 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:09:37.039 [2024-11-20 07:19:40.935579] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol c8e53312-d57e-4bd4-9dd3-2c66e5e5b296: failed to create esnap bs_dev: error -12 00:09:37.039 [2024-11-20 07:19:40.935713] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 6d6fb3d9-7da2-47d1-a9a3-3334e2a76511: failed to create esnap bs_dev: error -12 00:09:37.039 [2024-11-20 07:19:40.935773] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol a0be7f7b-dab0-40fc-9fb1-de57c88e34c2: failed to create esnap bs_dev: error -12 00:09:37.039 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:09:37.039 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:09:37.039 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:09:37.039 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:09:37.039 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:09:37.039 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:09:37.039 passed 00:09:37.039 Test: lvol_get_by ...passed 00:09:37.039 Test: lvol_shallow_copy ...passed 00:09:37.039 Test: lvol_set_parent ...[2024-11-20 07:19:40.936540] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:09:37.039 [2024-11-20 07:19:40.936575] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 81e4de9f-caba-4b45-9719-daf118fd48d0 shallow copy, ext_dev must not be NULL 00:09:37.039 [2024-11-20 07:19:40.936727] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:09:37.039 [2024-11-20 07:19:40.936755] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:09:37.039 passed 00:09:37.039 Test: lvol_set_external_parent ...passed 00:09:37.039 00:09:37.039 Run Summary: Type Total Ran Passed Failed Inactive 00:09:37.039 suites 1 1 n/a 0 0 00:09:37.039 tests 37 37 37 0 0 00:09:37.039 asserts 1505 1505 1505 0 n/a 00:09:37.039 00:09:37.039 Elapsed time = 0.010 seconds 00:09:37.039 [2024-11-20 07:19:40.936882] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:09:37.039 [2024-11-20 07:19:40.936904] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:09:37.039 [2024-11-20 07:19:40.936923] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:09:37.039 00:09:37.039 real 0m0.052s 00:09:37.039 user 0m0.021s 00:09:37.039 sys 0m0.032s 
00:09:37.039 07:19:40 unittest.unittest_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.039 ************************************ 00:09:37.039 END TEST unittest_lvol 00:09:37.039 ************************************ 00:09:37.039 07:19:40 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:37.300 07:19:40 unittest -- unit/unittest.sh@233 -- # [[ y == y ]] 00:09:37.300 07:19:40 unittest -- unit/unittest.sh@234 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:09:37.300 07:19:40 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.300 07:19:40 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.300 07:19:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:37.300 ************************************ 00:09:37.300 START TEST unittest_nvme_rdma 00:09:37.300 ************************************ 00:09:37.300 07:19:41 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:09:37.300 00:09:37.300 00:09:37.300 CUnit - A unit testing framework for C - Version 2.1-3 00:09:37.300 http://cunit.sourceforge.net/ 00:09:37.300 00:09:37.300 00:09:37.300 Suite: nvme_rdma 00:09:37.300 Test: test_nvme_rdma_build_sgl_request ...[2024-11-20 07:19:41.041363] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1390:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:09:37.300 passed 00:09:37.300 Test: test_nvme_rdma_build_sgl_inline_request ...[2024-11-20 07:19:41.041662] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1577:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:09:37.300 [2024-11-20 07:19:41.041936] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1633:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:09:37.300 passed 00:09:37.300 Test: test_nvme_rdma_build_contig_request ...[2024-11-20 07:19:41.042280] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1529:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:09:37.300 passed 00:09:37.300 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:09:37.300 Test: test_nvme_rdma_create_reqs ...passed 00:09:37.300 Test: test_nvme_rdma_create_rsps ...passed 00:09:37.300 Test: test_nvme_rdma_ctrlr_create_qpair ...passed 00:09:37.300 Test: test_nvme_rdma_poller_create ...passed 00:09:37.300 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:09:37.300 Test: test_nvme_rdma_ctrlr_construct ...[2024-11-20 07:19:41.042618] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 921:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:09:37.300 [2024-11-20 07:19:41.043176] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 839:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:09:37.300 [2024-11-20 07:19:41.043402] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1765:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:09:37.300 [2024-11-20 07:19:41.043440] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1765:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:09:37.300 [2024-11-20 07:19:41.043720] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 447:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:09:37.300 passed 00:09:37.300 Test: test_nvme_rdma_req_put_and_get ...passed 00:09:37.300 Test: test_nvme_rdma_req_init ...passed 00:09:37.300 Test: test_nvme_rdma_validate_cm_event ...passed 00:09:37.300 Test: test_nvme_rdma_qpair_init ...[2024-11-20 07:19:41.044172] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:09:37.300 [2024-11-20 07:19:41.044224] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:09:37.300 passed 00:09:37.300 Test: test_nvme_rdma_qpair_submit_request ...passed 00:09:37.300 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:09:37.300 Test: test_rdma_get_memory_translation ...passed 00:09:37.300 Test: test_get_rdma_qpair_from_wc ...passed 00:09:37.300 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:09:37.300 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:09:37.300 Test: test_nvme_rdma_qpair_set_poller ...passed 00:09:37.300 00:09:37.300 Run Summary: Type Total Ran Passed Failed Inactive 00:09:37.300 suites 1 1 n/a 0 0 00:09:37.300 tests 21 21 21 0 0 00:09:37.300 asserts 395 395 395 0 n/a 00:09:37.300 00:09:37.300 Elapsed time = 0.003 seconds 00:09:37.300 [2024-11-20 07:19:41.044361] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:09:37.300 [2024-11-20 07:19:41.044408] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1390:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:09:37.300 [2024-11-20 07:19:41.044590] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3262:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:37.300 [2024-11-20 07:19:41.044616] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3262:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:37.300 [2024-11-20 07:19:41.044791] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2965:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:09:37.300 [2024-11-20 07:19:41.044832] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3011:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:09:37.300 [2024-11-20 07:19:41.044866] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 644:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x760826113200 on poll group 0x50c000000040 00:09:37.300 [2024-11-20 07:19:41.044913] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2965:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:09:37.300 [2024-11-20 07:19:41.044947] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3011:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:09:37.300 [2024-11-20 07:19:41.044982] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 644:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x760826113200 on poll group 0x50c000000040 00:09:37.300 [2024-11-20 07:19:41.045054] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 622:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:09:37.300 00:09:37.300 real 0m0.047s 00:09:37.300 user 0m0.023s 00:09:37.300 sys 0m0.024s 00:09:37.300 07:19:41 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.300 ************************************ 00:09:37.300 END TEST unittest_nvme_rdma 00:09:37.300 ************************************ 00:09:37.300 07:19:41 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:37.300 07:19:41 unittest -- unit/unittest.sh@235 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:09:37.300 07:19:41 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.300 07:19:41 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.300 07:19:41 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:37.300 ************************************ 00:09:37.300 START TEST unittest_nvmf_transport 00:09:37.300 ************************************ 00:09:37.300 07:19:41 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:09:37.300 00:09:37.300 00:09:37.300 CUnit - A unit testing framework for C - Version 2.1-3 00:09:37.300 http://cunit.sourceforge.net/ 00:09:37.300 00:09:37.300 00:09:37.300 Suite: nvmf 00:09:37.300 Test: test_spdk_nvmf_transport_create ...[2024-11-20 07:19:41.156419] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:09:37.300 [2024-11-20 07:19:41.156739] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:09:37.300 [2024-11-20 07:19:41.156811] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:09:37.300 [2024-11-20 07:19:41.156908] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:09:37.300 passed 00:09:37.300 Test: test_nvmf_transport_poll_group_create ...passed 00:09:37.300 Test: test_spdk_nvmf_transport_opts_init ...passed 00:09:37.300 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:09:37.300 00:09:37.300 Run Summary: Type Total Ran Passed Failed Inactive 00:09:37.300 suites 1 1 n/a 0 0 00:09:37.300 tests 4 4 4 0 0 00:09:37.300 asserts 49 49 49 0 n/a 00:09:37.300 00:09:37.300 Elapsed time = 0.001 seconds 00:09:37.300 [2024-11-20 07:19:41.157323] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 799:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:09:37.300 [2024-11-20 07:19:41.157382] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 804:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:09:37.300 [2024-11-20 07:19:41.157428] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 809:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:09:37.300 00:09:37.300 real 0m0.058s 00:09:37.300 user 0m0.028s 00:09:37.300 sys 0m0.030s 00:09:37.300 ************************************ 00:09:37.300 END TEST unittest_nvmf_transport 00:09:37.300 ************************************ 00:09:37.300 07:19:41 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.300 07:19:41 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:09:37.560 07:19:41 unittest -- unit/unittest.sh@236 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:09:37.560 07:19:41 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.560 07:19:41 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.560 07:19:41 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:37.560 ************************************ 00:09:37.560 START TEST unittest_rdma 00:09:37.560 ************************************ 00:09:37.560 07:19:41 unittest.unittest_rdma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:09:37.560 00:09:37.560 00:09:37.560 CUnit - A unit testing framework for C - Version 2.1-3 00:09:37.560 http://cunit.sourceforge.net/ 00:09:37.560 00:09:37.561 00:09:37.561 Suite: rdma_common 00:09:37.561 Test: test_spdk_rdma_pd ...[2024-11-20 07:19:41.279122] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 400:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:09:37.561 [2024-11-20 07:19:41.279447] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 400:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:09:37.561 passed 00:09:37.561 00:09:37.561 Run Summary: Type Total Ran Passed Failed Inactive 00:09:37.561 suites 1 1 n/a 0 0 00:09:37.561 tests 1 1 1 0 0 00:09:37.561 asserts 31 31 31 0 n/a 00:09:37.561 00:09:37.561 Elapsed time = 0.001 seconds 00:09:37.561 00:09:37.561 real 0m0.045s 00:09:37.561 user 0m0.024s 00:09:37.561 sys 0m0.022s 00:09:37.561 07:19:41 unittest.unittest_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.561 07:19:41 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:37.561 ************************************ 00:09:37.561 END TEST unittest_rdma 00:09:37.561 ************************************ 00:09:37.561 07:19:41 unittest -- unit/unittest.sh@237 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:09:37.561 07:19:41 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.561 07:19:41 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.561 07:19:41 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:37.561 ************************************ 00:09:37.561 START TEST unittest_nvmf_rdma 00:09:37.561 ************************************ 00:09:37.561 07:19:41 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:09:37.561 00:09:37.561 00:09:37.561 CUnit - A unit testing framework for C - Version 2.1-3 00:09:37.561 http://cunit.sourceforge.net/ 00:09:37.561 00:09:37.561 00:09:37.561 Suite: nvmf 
00:09:37.561 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-11-20 07:19:41.395721] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1864:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:09:37.561 [2024-11-20 07:19:41.395999] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:09:37.561 [2024-11-20 07:19:41.396051] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:09:37.561 passed 00:09:37.561 Test: test_spdk_nvmf_rdma_request_process ...passed 00:09:37.561 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:09:37.561 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:09:37.561 Test: test_nvmf_rdma_opts_init ...passed 00:09:37.561 Test: test_nvmf_rdma_request_free_data ...passed 00:09:37.561 Test: test_nvmf_rdma_resources_create ...passed 00:09:37.561 Test: test_nvmf_rdma_qpair_compare ...passed 00:09:37.561 Test: test_nvmf_rdma_resize_cq ...[2024-11-20 07:19:41.399613] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 955:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:09:37.561 Using CQ of insufficient size may lead to CQ overrun 00:09:37.561 [2024-11-20 07:19:41.399668] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 960:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:09:37.561 passed 00:09:37.561 00:09:37.561 Run Summary: Type Total Ran Passed Failed Inactive 00:09:37.561 suites 1 1 n/a 0 0 00:09:37.561 tests 9 9 9 0 0 00:09:37.561 asserts 579 579 579 0 n/a 00:09:37.561 00:09:37.561 Elapsed time = 0.004 seconds[2024-11-20 07:19:41.399755] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 968:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:09:37.561 00:09:37.561 00:09:37.561 real 0m0.063s 00:09:37.561 user 0m0.034s 00:09:37.561 sys 0m0.029s 00:09:37.561 07:19:41 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.561 07:19:41 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:37.561 ************************************ 00:09:37.561 END TEST unittest_nvmf_rdma 00:09:37.561 ************************************ 00:09:37.561 07:19:41 unittest -- unit/unittest.sh@240 -- # [[ y == y ]] 00:09:37.561 07:19:41 unittest -- unit/unittest.sh@241 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:09:37.561 07:19:41 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.561 07:19:41 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.561 07:19:41 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:37.821 ************************************ 00:09:37.821 START TEST unittest_nvme_cuse 00:09:37.821 ************************************ 00:09:37.821 07:19:41 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:09:37.821 00:09:37.821 00:09:37.821 CUnit - A unit testing framework for C - Version 2.1-3 00:09:37.821 http://cunit.sourceforge.net/ 00:09:37.821 00:09:37.821 00:09:37.821 Suite: nvme_cuse 00:09:37.821 Test: test_cuse_nvme_submit_io_read_write ...passed 00:09:37.821 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:09:37.821 Test: 
test_cuse_nvme_submit_passthru_cmd ...passed 00:09:37.821 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:09:37.821 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:09:37.821 Test: test_cuse_nvme_submit_io ...[2024-11-20 07:19:41.515559] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:09:37.821 passed 00:09:37.821 Test: test_cuse_nvme_reset ...[2024-11-20 07:19:41.515854] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:09:37.821 passed 00:09:38.390 Test: test_nvme_cuse_stop ...passed 00:09:38.390 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:09:38.390 00:09:38.390 Run Summary: Type Total Ran Passed Failed Inactive 00:09:38.390 suites 1 1 n/a 0 0 00:09:38.390 tests 9 9 9 0 0 00:09:38.390 asserts 118 118 118 0 n/a 00:09:38.390 00:09:38.390 Elapsed time = 0.505 seconds 00:09:38.390 00:09:38.390 real 0m0.541s 00:09:38.390 user 0m0.095s 00:09:38.390 sys 0m0.447s 00:09:38.390 07:19:42 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.390 07:19:42 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:09:38.390 ************************************ 00:09:38.390 END TEST unittest_nvme_cuse 00:09:38.390 ************************************ 00:09:38.390 07:19:42 unittest -- unit/unittest.sh@244 -- # run_test unittest_nvmf unittest_nvmf 00:09:38.390 07:19:42 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.390 07:19:42 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.390 07:19:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:38.390 ************************************ 00:09:38.390 START TEST unittest_nvmf 00:09:38.390 ************************************ 00:09:38.390 07:19:42 unittest.unittest_nvmf -- common/autotest_common.sh@1129 -- # unittest_nvmf 00:09:38.390 07:19:42 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:09:38.390 00:09:38.390 00:09:38.390 CUnit - A unit testing framework for C - Version 2.1-3 00:09:38.390 http://cunit.sourceforge.net/ 00:09:38.390 00:09:38.390 00:09:38.390 Suite: nvmf 00:09:38.390 Test: test_get_log_page ...[2024-11-20 07:19:42.107414] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:09:38.390 passed 00:09:38.390 Test: test_process_fabrics_cmd ...[2024-11-20 07:19:42.107640] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4860:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:09:38.390 passed 00:09:38.390 Test: test_connect ...[2024-11-20 07:19:42.108237] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1013:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:09:38.390 [2024-11-20 07:19:42.108287] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 876:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:09:38.390 [2024-11-20 07:19:42.108301] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1052:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:09:38.390 [2024-11-20 07:19:42.108328] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:09:38.390 [2024-11-20 07:19:42.108420] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 
887:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:09:38.390 [2024-11-20 07:19:42.108452] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 894:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:09:38.390 [2024-11-20 07:19:42.108488] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 900:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:09:38.390 [2024-11-20 07:19:42.108515] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 927:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:09:38.390 [2024-11-20 07:19:42.108586] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:09:38.390 [2024-11-20 07:19:42.108660] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 677:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:09:38.390 [2024-11-20 07:19:42.108915] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 683:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:09:38.390 [2024-11-20 07:19:42.108976] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 689:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:09:38.390 [2024-11-20 07:19:42.109025] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 696:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:09:38.390 [2024-11-20 07:19:42.109087] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 720:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:09:38.390 [2024-11-20 07:19:42.109166] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 295:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:09:38.390 [2024-11-20 07:19:42.109295] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 807:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil)) 00:09:38.390 [2024-11-20 07:19:42.109349] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 807:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:09:38.390 passed 00:09:38.390 Test: test_get_ns_id_desc_list ...passed 00:09:38.390 Test: test_identify_ns ...[2024-11-20 07:19:42.109635] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:38.390 [2024-11-20 07:19:42.109853] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:09:38.390 passed 00:09:38.390 Test: test_identify_ns_iocs_specific ...[2024-11-20 07:19:42.109937] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:09:38.390 [2024-11-20 07:19:42.110062] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:38.390 [2024-11-20 07:19:42.110281] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:38.390 passed 00:09:38.390 Test: test_reservation_write_exclusive ...passed 00:09:38.390 Test: test_reservation_exclusive_access ...passed 00:09:38.390 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:09:38.390 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:09:38.390 Test: test_reservation_notification_log_page ...passed 00:09:38.390 Test: test_get_dif_ctx ...passed 00:09:38.390 Test: test_set_get_features ...[2024-11-20 
07:19:42.110714] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1649:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:09:38.390 [2024-11-20 07:19:42.110744] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1649:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:09:38.390 [2024-11-20 07:19:42.110767] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1660:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:09:38.390 [2024-11-20 07:19:42.110796] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1736:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:09:38.390 passed 00:09:38.390 Test: test_identify_ctrlr ...passed 00:09:38.390 Test: test_identify_ctrlr_iocs_specific ...passed 00:09:38.390 Test: test_custom_admin_cmd ...passed 00:09:38.390 Test: test_fused_compare_and_write ...[2024-11-20 07:19:42.111187] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4368:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:09:38.390 [2024-11-20 07:19:42.111228] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4357:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:09:38.390 passed 00:09:38.390 Test: test_multi_async_event_reqs ...[2024-11-20 07:19:42.111261] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4375:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:09:38.390 passed 00:09:38.390 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:09:38.391 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:09:38.391 Test: test_multi_async_events ...passed 00:09:38.391 Test: test_rae ...passed 00:09:38.391 Test: test_nvmf_ctrlr_create_destruct ...passed 00:09:38.391 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:09:38.391 Test: test_spdk_nvmf_request_zcopy_start ...[2024-11-20 07:19:42.111802] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4860:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:09:38.391 [2024-11-20 07:19:42.111844] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4886:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:09:38.391 passed 00:09:38.391 Test: test_zcopy_read ...passed 00:09:38.391 Test: test_zcopy_write ...passed 00:09:38.391 Test: test_nvmf_property_set ...passed 00:09:38.391 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-11-20 07:19:42.112015] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1947:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:09:38.391 passed 00:09:38.391 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-11-20 07:19:42.112048] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1947:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:09:38.391 [2024-11-20 07:19:42.112087] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1971:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:09:38.391 [2024-11-20 07:19:42.112110] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1977:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:09:38.391 [2024-11-20 07:19:42.112139] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1989:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:09:38.391 [2024-11-20 07:19:42.112155] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1989:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support 
invalid acre: 0x02 00:09:38.391 passed 00:09:38.391 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:09:38.391 Test: test_nvmf_check_qpair_active ...[2024-11-20 07:19:42.112329] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4860:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:09:38.391 [2024-11-20 07:19:42.112354] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4874:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:09:38.391 [2024-11-20 07:19:42.112388] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4886:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:09:38.391 passed[2024-11-20 07:19:42.112409] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4886:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:09:38.391 [2024-11-20 07:19:42.112419] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4886:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:09:38.391 00:09:38.391 00:09:38.391 Run Summary: Type Total Ran Passed Failed Inactive 00:09:38.391 suites 1 1 n/a 0 0 00:09:38.391 tests 32 32 32 0 0 00:09:38.391 asserts 993 993 993 0 n/a 00:09:38.391 00:09:38.391 Elapsed time = 0.005 seconds 00:09:38.391 07:19:42 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:09:38.391 00:09:38.391 00:09:38.391 CUnit - A unit testing framework for C - Version 2.1-3 00:09:38.391 http://cunit.sourceforge.net/ 00:09:38.391 00:09:38.391 00:09:38.391 Suite: nvmf 00:09:38.391 Test: test_get_rw_params ...passed 00:09:38.391 Test: test_get_rw_ext_params ...passed 00:09:38.391 Test: test_lba_in_range ...passed 00:09:38.391 Test: test_get_dif_ctx ...passed 00:09:38.391 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:09:38.391 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-11-20 07:19:42.161455] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 499:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:09:38.391 [2024-11-20 07:19:42.161737] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:09:38.391 [2024-11-20 07:19:42.161796] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 514:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:09:38.391 passed 00:09:38.391 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-11-20 07:19:42.161875] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c:1018:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:09:38.391 [2024-11-20 07:19:42.161922] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c:1025:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:09:38.391 passed 00:09:38.391 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-11-20 07:19:42.161985] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 453:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:09:38.391 [2024-11-20 07:19:42.162032] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 460:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:09:38.391 [2024-11-20 07:19:42.162094] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 552:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:09:38.391 [2024-11-20 07:19:42.162140] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 559:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end 
of media 00:09:38.391 passed 00:09:38.391 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:09:38.391 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:09:38.391 00:09:38.391 Run Summary: Type Total Ran Passed Failed Inactive 00:09:38.391 suites 1 1 n/a 0 0 00:09:38.391 tests 10 10 10 0 0 00:09:38.391 asserts 159 159 159 0 n/a 00:09:38.391 00:09:38.391 Elapsed time = 0.001 seconds 00:09:38.391 07:19:42 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:09:38.391 00:09:38.391 00:09:38.391 CUnit - A unit testing framework for C - Version 2.1-3 00:09:38.391 http://cunit.sourceforge.net/ 00:09:38.391 00:09:38.391 00:09:38.391 Suite: nvmf 00:09:38.391 Test: test_discovery_log ...passed 00:09:38.391 Test: test_discovery_log_with_filters ...passed 00:09:38.391 00:09:38.391 Run Summary: Type Total Ran Passed Failed Inactive 00:09:38.391 suites 1 1 n/a 0 0 00:09:38.391 tests 2 2 2 0 0 00:09:38.391 asserts 238 238 238 0 n/a 00:09:38.391 00:09:38.391 Elapsed time = 0.003 seconds 00:09:38.391 07:19:42 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:09:38.391 00:09:38.391 00:09:38.391 CUnit - A unit testing framework for C - Version 2.1-3 00:09:38.391 http://cunit.sourceforge.net/ 00:09:38.391 00:09:38.391 00:09:38.391 Suite: nvmf 00:09:38.391 Test: nvmf_test_create_subsystem ...[2024-11-20 07:19:42.264173] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:09:38.391 [2024-11-20 07:19:42.264441] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:09:38.391 [2024-11-20 07:19:42.264573] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:09:38.391 [2024-11-20 07:19:42.264615] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:09:38.391 [2024-11-20 07:19:42.264657] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:09:38.391 [2024-11-20 07:19:42.264707] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:09:38.391 [2024-11-20 07:19:42.264750] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:09:38.391 [2024-11-20 07:19:42.264787] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:09:38.391 [2024-11-20 07:19:42.264816] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 
00:09:38.391 [2024-11-20 07:19:42.264865] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:09:38.391 [2024-11-20 07:19:42.264905] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:09:38.391 [2024-11-20 07:19:42.264934] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:09:38.391 [2024-11-20 07:19:42.265039] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:09:38.391 [2024-11-20 07:19:42.265073] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:09:38.391 [2024-11-20 07:19:42.265206] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:09:38.391 [2024-11-20 07:19:42.265240] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:09:38.391 [2024-11-20 07:19:42.265352] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:09:38.391 [2024-11-20 07:19:42.265389] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:09:38.391 [2024-11-20 07:19:42.265425] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:09:38.391 [2024-11-20 07:19:42.265448] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:09:38.391 [2024-11-20 07:19:42.265490] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:09:38.391 passed 00:09:38.391 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-11-20 07:19:42.265522] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:09:38.392 [2024-11-20 07:19:42.265883] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:09:38.392 [2024-11-20 07:19:42.265934] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2096:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invpassed 00:09:38.392 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...alid NSID 4294967295 00:09:38.392 [2024-11-20 07:19:42.266153] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2230:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 00:09:38.392 passed 00:09:38.392 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:09:38.392 Test: test_spdk_nvmf_ns_visible ...[2024-11-20 07:19:42.266407] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:09:38.392 passed 00:09:38.392 Test: test_reservation_register ...[2024-11-20 07:19:42.266990] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:38.392 [2024-11-20 07:19:42.267121] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3277:nvmf_ns_reservation_register: *ERROR*: No registrant 00:09:38.392 passed 00:09:38.392 Test: test_reservation_register_with_ptpl ...passed 00:09:38.392 Test: test_reservation_acquire_preempt_1 ...[2024-11-20 07:19:42.268344] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:38.392 passed 00:09:38.392 Test: test_reservation_acquire_release_with_ptpl ...passed 00:09:38.392 Test: test_reservation_release ...[2024-11-20 07:19:42.270249] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:38.392 passed 00:09:38.392 Test: test_reservation_unregister_notification ...[2024-11-20 07:19:42.270448] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:38.392 passed 00:09:38.392 Test: test_reservation_release_notification ...[2024-11-20 07:19:42.270628] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:38.392 passed 00:09:38.392 Test: test_reservation_release_notification_write_exclusive ...[2024-11-20 07:19:42.270837] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:38.392 passed 00:09:38.392 Test: test_reservation_clear_notification ...[2024-11-20 07:19:42.271041] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:38.392 passed 00:09:38.392 Test: test_reservation_preempt_notification ...[2024-11-20 07:19:42.271248] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:38.392 passed 00:09:38.392 Test: test_spdk_nvmf_ns_event ...passed 00:09:38.392 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:09:38.392 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:09:38.392 Test: test_spdk_nvmf_subsystem_add_host ...[2024-11-20 07:19:42.272199] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:09:38.392 passed 00:09:38.392 Test: test_nvmf_ns_reservation_report ...[2024-11-20 07:19:42.272289] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:09:38.392 [2024-11-20 07:19:42.272391] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3582:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:09:38.392 passed 00:09:38.392 Test: test_nvmf_nqn_is_valid ...passed 00:09:38.392 Test: test_nvmf_ns_reservation_restore ...[2024-11-20 07:19:42.272441] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:09:38.392 [2024-11-20 07:19:42.272464] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:0fb6eb13-a9a2-4fac-8116-c0724f27cd3": uuid is not the correct length 00:09:38.392 [2024-11-20 07:19:42.272483] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:09:38.392 [2024-11-20 07:19:42.272557] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2776:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:09:38.392 passed 00:09:38.392 Test: test_nvmf_subsystem_state_change ...passed 00:09:38.392 Test: test_nvmf_reservation_custom_ops ...passed 00:09:38.392 00:09:38.392 Run Summary: Type Total Ran Passed Failed Inactive 00:09:38.392 suites 1 1 n/a 0 0 00:09:38.392 tests 24 24 24 0 0 00:09:38.392 asserts 499 499 499 0 n/a 00:09:38.392 00:09:38.392 Elapsed time = 0.009 seconds 00:09:38.392 07:19:42 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:09:38.652 00:09:38.652 00:09:38.652 CUnit - A unit testing framework for C - Version 2.1-3 00:09:38.652 http://cunit.sourceforge.net/ 00:09:38.652 00:09:38.652 00:09:38.652 Suite: nvmf 00:09:38.652 Test: test_nvmf_tcp_create ...passed 00:09:38.652 Test: test_nvmf_tcp_destroy ...[2024-11-20 07:19:42.350614] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 811:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:09:38.652 passed 00:09:38.652 Test: test_nvmf_tcp_poll_group_create ...passed 00:09:38.652 Test: test_nvmf_tcp_send_c2h_data ...passed 00:09:38.652 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:09:38.652 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:09:38.652 Test: test_nvmf_tcp_qpair_init_mem_resource ...[2024-11-20 07:19:42.463648] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bd09cb0 is same with the state(5) to be set 00:09:38.652 passed 00:09:38.652 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-11-20 07:19:42.496642] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:38.652 passed 00:09:38.652 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:09:38.652 Test: test_nvmf_tcp_icreq_handle ...[2024-11-20 07:19:42.496734] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bd0b030 is same with the state(6) to be set 00:09:38.652 [2024-11-20 07:19:42.496767] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bd0b030 is same with the state(6) to be set 00:09:38.652 [2024-11-20 
07:19:42.496795] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:38.652 [2024-11-20 07:19:42.496823] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bd0b030 is same with the state(6) to be set 00:09:38.652 [2024-11-20 07:19:42.496928] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2288:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:09:38.652 [2024-11-20 07:19:42.496972] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:38.652 [2024-11-20 07:19:42.497003] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bd0d190 is same with the state(6) to be set 00:09:38.652 [2024-11-20 07:19:42.497032] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2288:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:09:38.652 [2024-11-20 07:19:42.497062] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bd0d190 is same with the state(6) to be set 00:09:38.652 [2024-11-20 07:19:42.497095] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:38.652 [2024-11-20 07:19:42.497119] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bd0d190 is same with the state(6) to be set 00:09:38.652 [2024-11-20 07:19:42.497163] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:09:38.652 passed 00:09:38.652 Test: test_nvmf_tcp_check_xfer_type ...[2024-11-20 07:19:42.497186] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bd0d190 is same with the state(6) to be set 00:09:38.652 passed 00:09:38.652 Test: test_nvmf_tcp_invalid_sgl ...[2024-11-20 07:19:42.497253] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2697:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:09:38.652 [2024-11-20 07:19:42.497282] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:38.652 passed 00:09:38.652 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-11-20 07:19:42.497314] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bd116f0 is same with the state(6) to be set 00:09:38.652 [2024-11-20 07:19:42.497355] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2415:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x79366bc0c8e0 00:09:38.652 [2024-11-20 07:19:42.497389] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:38.652 [2024-11-20 07:19:42.497426] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bc0c030 is same with the state(6) to be set 00:09:38.653 [2024-11-20 07:19:42.497451] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2472:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x79366bc0c030 00:09:38.653 [2024-11-20 07:19:42.497476] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:38.653 [2024-11-20 07:19:42.497503] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bc0c030 is same with the state(6) to be set 00:09:38.653 [2024-11-20 07:19:42.497530] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2425:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:09:38.653 [2024-11-20 07:19:42.497553] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:38.653 [2024-11-20 07:19:42.497580] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bc0c030 is same with the state(6) to be set 00:09:38.653 [2024-11-20 07:19:42.497607] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2464:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:09:38.653 [2024-11-20 07:19:42.497632] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:38.653 [2024-11-20 07:19:42.497664] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bc0c030 is same with the state(6) to be set 00:09:38.653 [2024-11-20 07:19:42.497704] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:38.653 [2024-11-20 07:19:42.497732] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bc0c030 is same with the state(6) to be set 00:09:38.653 [2024-11-20 07:19:42.497762] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:38.653 [2024-11-20 07:19:42.497789] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bc0c030 is same with the state(6) to be set 00:09:38.653 [2024-11-20 07:19:42.497812] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:38.653 [2024-11-20 07:19:42.497835] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bc0c030 is same with the state(6) to be set 00:09:38.653 [2024-11-20 07:19:42.497871] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:38.653 [2024-11-20 07:19:42.497893] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bc0c030 is same with the state(6) to be set 00:09:38.653 [2024-11-20 07:19:42.497919] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:38.653 [2024-11-20 07:19:42.497952] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bc0c030 is same with the state(6) to be set 00:09:38.653 [2024-11-20 07:19:42.497979] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:38.653 passed 00:09:38.653 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-11-20 07:19:42.498007] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79366bc0c030 is same with the state(6) to be set 00:09:38.653 passed 00:09:38.653 Test: test_nvmf_tcp_tls_generate_psk_id ...passed 00:09:38.653 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-11-20 07:19:42.531891] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 584:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:09:38.653 [2024-11-20 07:19:42.531975] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 595:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:09:38.653 passed 00:09:38.653 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-11-20 07:19:42.533091] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 651:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:09:38.653 [2024-11-20 07:19:42.533150] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 656:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:09:38.653 passed[2024-11-20 07:19:42.533745] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 725:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:09:38.653 [2024-11-20 07:19:42.533787] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 749:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:09:38.653 00:09:38.653 00:09:38.653 Run Summary: Type Total Ran Passed Failed Inactive 00:09:38.653 suites 1 1 n/a 0 0 00:09:38.653 tests 17 17 17 0 0 00:09:38.653 asserts 215 215 215 0 n/a 00:09:38.653 00:09:38.653 Elapsed time = 0.210 seconds 00:09:38.913 07:19:42 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:09:38.913 00:09:38.913 00:09:38.913 CUnit - A unit testing framework for C - Version 2.1-3 00:09:38.913 http://cunit.sourceforge.net/ 00:09:38.913 00:09:38.913 00:09:38.913 Suite: nvmf 00:09:38.913 Test: test_nvmf_tgt_create_poll_group ...passed 00:09:38.913 00:09:38.913 Run Summary: Type Total Ran Passed Failed Inactive 00:09:38.913 suites 1 1 n/a 0 0 00:09:38.913 tests 1 1 1 0 0 00:09:38.913 asserts 17 17 17 0 n/a 00:09:38.913 00:09:38.913 Elapsed time = 0.019 seconds 00:09:38.913 00:09:38.913 real 0m0.611s 00:09:38.913 user 0m0.265s 00:09:38.913 sys 0m0.344s 00:09:38.913 07:19:42 unittest.unittest_nvmf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.913 07:19:42 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:09:38.913 ************************************ 00:09:38.913 END TEST unittest_nvmf 00:09:38.913 ************************************ 00:09:38.913 07:19:42 unittest -- unit/unittest.sh@245 -- # [[ n == y ]] 00:09:38.913 07:19:42 unittest -- unit/unittest.sh@250 -- # [[ n == y ]] 00:09:38.913 07:19:42 unittest -- unit/unittest.sh@254 -- # run_test unittest_scsi unittest_scsi 00:09:38.913 07:19:42 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.913 07:19:42 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.913 07:19:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:38.913 ************************************ 00:09:38.913 START TEST unittest_scsi 00:09:38.913 ************************************ 00:09:38.913 07:19:42 unittest.unittest_scsi -- common/autotest_common.sh@1129 -- # unittest_scsi 00:09:38.913 07:19:42 unittest.unittest_scsi -- unit/unittest.sh@117 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:09:38.913 00:09:38.913 00:09:38.913 CUnit - A unit testing framework for C - Version 2.1-3 00:09:38.913 http://cunit.sourceforge.net/ 00:09:38.913 00:09:38.913 00:09:38.913 Suite: dev_suite 00:09:38.913 Test: dev_destruct_null_dev ...passed 00:09:38.913 Test: dev_destruct_zero_luns ...passed 00:09:38.913 Test: dev_destruct_null_lun ...passed 00:09:38.913 Test: dev_destruct_success ...passed 00:09:38.913 Test: dev_construct_num_luns_zero ...[2024-11-20 07:19:42.784525] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:09:38.913 passed 00:09:38.913 Test: dev_construct_no_lun_zero ...passed 00:09:38.913 Test: dev_construct_null_lun ...[2024-11-20 07:19:42.784938] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:09:38.913 passed 00:09:38.913 Test: dev_construct_name_too_long ...[2024-11-20 07:19:42.785020] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:09:38.913 passed 00:09:38.913 Test: dev_construct_success ...[2024-11-20 07:19:42.785103] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:09:38.913 passed 00:09:38.913 Test: dev_construct_success_lun_zero_not_first ...passed 00:09:38.913 Test: dev_queue_mgmt_task_success ...passed 00:09:38.913 Test: dev_queue_task_success ...passed 00:09:38.913 Test: dev_stop_success ...passed 00:09:38.913 Test: dev_add_port_max_ports ...[2024-11-20 07:19:42.785615] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:09:38.913 passed 00:09:38.913 Test: dev_add_port_construct_failure1 ...[2024-11-20 07:19:42.785742] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:09:38.913 passed 00:09:38.913 Test: dev_add_port_construct_failure2 ...[2024-11-20 07:19:42.785792] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:09:38.913 passed 00:09:38.913 Test: dev_add_port_success1 ...passed 00:09:38.913 Test: dev_add_port_success2 ...passed 00:09:38.913 Test: dev_add_port_success3 ...passed 00:09:38.913 Test: dev_find_port_by_id_num_ports_zero ...passed 00:09:38.913 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:09:38.913 Test: dev_find_port_by_id_success ...passed 00:09:38.913 Test: dev_add_lun_bdev_not_found ...passed 00:09:38.913 Test: dev_add_lun_no_free_lun_id ...passed 00:09:38.913 Test: dev_add_lun_success1 ...[2024-11-20 07:19:42.786516] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:09:38.913 passed 00:09:38.913 Test: dev_add_lun_success2 ...passed 00:09:38.913 Test: dev_check_pending_tasks ...passed 00:09:38.913 Test: dev_iterate_luns ...passed 00:09:38.913 Test: dev_find_free_lun ...passed 00:09:38.913 00:09:38.913 Run Summary: Type Total Ran Passed Failed Inactive 00:09:38.913 suites 1 1 n/a 0 0 00:09:38.913 tests 29 29 29 0 0 00:09:38.913 asserts 97 97 97 0 n/a 00:09:38.913 00:09:38.913 Elapsed 
time = 0.003 seconds 00:09:38.913 07:19:42 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:09:38.913 00:09:38.913 00:09:38.913 CUnit - A unit testing framework for C - Version 2.1-3 00:09:38.913 http://cunit.sourceforge.net/ 00:09:38.913 00:09:38.913 00:09:38.913 Suite: lun_suite 00:09:38.913 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:09:38.913 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-11-20 07:19:42.839643] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:09:38.913 passed 00:09:38.913 Test: lun_task_mgmt_execute_lun_reset ...passed 00:09:38.913 Test: lun_task_mgmt_execute_target_reset ...passed 00:09:38.913 Test: lun_task_mgmt_execute_invalid_case ...[2024-11-20 07:19:42.839956] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:09:38.913 [2024-11-20 07:19:42.840110] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:09:38.913 passed 00:09:38.913 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:09:38.913 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:09:38.913 Test: lun_append_task_null_lun_not_supported ...passed 00:09:38.913 Test: lun_execute_scsi_task_pending ...passed 00:09:38.913 Test: lun_execute_scsi_task_complete ...passed 00:09:38.913 Test: lun_execute_scsi_task_resize ...passed 00:09:38.913 Test: lun_destruct_success ...passed 00:09:38.913 Test: lun_construct_null_ctx ...passed 00:09:38.914 Test: lun_construct_success ...[2024-11-20 07:19:42.840346] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:09:39.173 passed 00:09:39.173 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:09:39.173 Test: lun_reset_task_suspend_scsi_task ...passed 00:09:39.173 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:09:39.173 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:09:39.173 00:09:39.173 Run Summary: Type Total Ran Passed Failed Inactive 00:09:39.173 suites 1 1 n/a 0 0 00:09:39.173 tests 18 18 18 0 0 00:09:39.173 asserts 153 153 153 0 n/a 00:09:39.173 00:09:39.173 Elapsed time = 0.001 seconds 00:09:39.173 07:19:42 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:09:39.173 00:09:39.173 00:09:39.173 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.173 http://cunit.sourceforge.net/ 00:09:39.173 00:09:39.173 00:09:39.173 Suite: scsi_suite 00:09:39.173 Test: scsi_init ...passed 00:09:39.173 00:09:39.173 Run Summary: Type Total Ran Passed Failed Inactive 00:09:39.173 suites 1 1 n/a 0 0 00:09:39.173 tests 1 1 1 0 0 00:09:39.173 asserts 1 1 1 0 n/a 00:09:39.173 00:09:39.173 Elapsed time = 0.000 seconds 00:09:39.173 07:19:42 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:09:39.173 00:09:39.173 00:09:39.173 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.173 http://cunit.sourceforge.net/ 00:09:39.173 00:09:39.173 00:09:39.173 Suite: translation_suite 00:09:39.173 Test: mode_select_6_test ...passed 00:09:39.173 Test: mode_select_6_test2 ...passed 00:09:39.173 Test: mode_sense_6_test ...passed 00:09:39.173 Test: mode_sense_10_test ...passed 00:09:39.173 Test: inquiry_evpd_test 
...passed 00:09:39.173 Test: inquiry_standard_test ...passed 00:09:39.173 Test: inquiry_overflow_test ...passed 00:09:39.173 Test: task_complete_test ...passed 00:09:39.173 Test: lba_range_test ...passed 00:09:39.173 Test: xfer_len_test ...[2024-11-20 07:19:42.932521] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:09:39.173 passed 00:09:39.173 Test: xfer_test ...passed 00:09:39.173 Test: scsi_name_padding_test ...passed 00:09:39.173 Test: get_dif_ctx_test ...passed 00:09:39.173 Test: unmap_split_test ...passed 00:09:39.173 00:09:39.173 Run Summary: Type Total Ran Passed Failed Inactive 00:09:39.173 suites 1 1 n/a 0 0 00:09:39.173 tests 14 14 14 0 0 00:09:39.173 asserts 1205 1205 1205 0 n/a 00:09:39.173 00:09:39.173 Elapsed time = 0.006 seconds 00:09:39.173 07:19:42 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:09:39.173 00:09:39.173 00:09:39.173 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.173 http://cunit.sourceforge.net/ 00:09:39.173 00:09:39.173 00:09:39.173 Suite: reservation_suite 00:09:39.173 Test: test_reservation_register ...[2024-11-20 07:19:42.979228] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:39.173 passed 00:09:39.173 Test: test_reservation_reserve ...[2024-11-20 07:19:42.979625] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:39.173 [2024-11-20 07:19:42.979763] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:09:39.173 [2024-11-20 07:19:42.979864] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:09:39.173 passed 00:09:39.174 Test: test_all_registrant_reservation_reserve ...[2024-11-20 07:19:42.979975] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:39.174 passed 00:09:39.174 Test: test_all_registrant_reservation_access ...[2024-11-20 07:19:42.980149] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:39.174 [2024-11-20 07:19:42.980253] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8 00:09:39.174 [2024-11-20 07:19:42.980316] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa 00:09:39.174 passed 00:09:39.174 Test: test_reservation_preempt_non_all_regs ...[2024-11-20 07:19:42.980437] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:39.174 [2024-11-20 07:19:42.980524] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:09:39.174 passed 00:09:39.174 Test: test_reservation_preempt_all_regs ...[2024-11-20 07:19:42.980649] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:39.174 passed 00:09:39.174 Test: test_reservation_cmds_conflict ...[2024-11-20 07:19:42.980845] 
/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:39.174 [2024-11-20 07:19:42.980958] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 857:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:09:39.174 [2024-11-20 07:19:42.981023] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:09:39.174 [2024-11-20 07:19:42.981069] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:09:39.174 [2024-11-20 07:19:42.981128] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:09:39.174 [2024-11-20 07:19:42.981188] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:09:39.174 passed 00:09:39.174 Test: test_scsi2_reserve_release ...passed 00:09:39.174 Test: test_pr_with_scsi2_reserve_release ...[2024-11-20 07:19:42.981299] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:39.174 passed 00:09:39.174 00:09:39.174 Run Summary: Type Total Ran Passed Failed Inactive 00:09:39.174 suites 1 1 n/a 0 0 00:09:39.174 tests 9 9 9 0 0 00:09:39.174 asserts 344 344 344 0 n/a 00:09:39.174 00:09:39.174 Elapsed time = 0.002 seconds 00:09:39.174 00:09:39.174 real 0m0.230s 00:09:39.174 user 0m0.118s 00:09:39.174 sys 0m0.115s 00:09:39.174 07:19:42 unittest.unittest_scsi -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.174 07:19:42 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:09:39.174 ************************************ 00:09:39.174 END TEST unittest_scsi 00:09:39.174 ************************************ 00:09:39.174 07:19:43 unittest -- unit/unittest.sh@255 -- # uname -s 00:09:39.174 07:19:43 unittest -- unit/unittest.sh@255 -- # '[' Linux = Linux ']' 00:09:39.174 07:19:43 unittest -- unit/unittest.sh@258 -- # run_test unittest_sock unittest_sock 00:09:39.174 07:19:43 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.174 07:19:43 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.174 07:19:43 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:39.174 ************************************ 00:09:39.174 START TEST unittest_sock 00:09:39.174 ************************************ 00:09:39.174 07:19:43 unittest.unittest_sock -- common/autotest_common.sh@1129 -- # unittest_sock 00:09:39.174 07:19:43 unittest.unittest_sock -- unit/unittest.sh@125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:09:39.174 00:09:39.174 00:09:39.174 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.174 http://cunit.sourceforge.net/ 00:09:39.174 00:09:39.174 00:09:39.174 Suite: sock 00:09:39.433 Test: posix_sock ...passed 00:09:39.433 Test: ut_sock ...passed 00:09:39.433 Test: posix_sock_group ...passed 00:09:39.433 Test: ut_sock_group ...passed 00:09:39.433 Test: posix_sock_group_fairness ...passed 00:09:39.433 Test: _posix_sock_close ...passed 00:09:39.433 Test: sock_get_default_opts ...passed 00:09:39.433 Test: ut_sock_impl_get_set_opts ...passed 00:09:39.433 Test: posix_sock_impl_get_set_opts ...passed 00:09:39.433 Test: ut_sock_map ...passed 00:09:39.433 
Test: override_impl_opts ...passed 00:09:39.433 Test: ut_sock_group_get_ctx ...passed 00:09:39.433 Test: posix_get_interface_name ...passed 00:09:39.433 00:09:39.433 Run Summary: Type Total Ran Passed Failed Inactive 00:09:39.433 suites 1 1 n/a 0 0 00:09:39.433 tests 13 13 13 0 0 00:09:39.433 asserts 360 360 360 0 n/a 00:09:39.433 00:09:39.433 Elapsed time = 0.014 seconds 00:09:39.433 07:19:43 unittest.unittest_sock -- unit/unittest.sh@126 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:09:39.433 00:09:39.433 00:09:39.433 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.433 http://cunit.sourceforge.net/ 00:09:39.433 00:09:39.433 00:09:39.433 Suite: posix 00:09:39.433 Test: flush ...passed 00:09:39.433 00:09:39.433 Run Summary: Type Total Ran Passed Failed Inactive 00:09:39.433 suites 1 1 n/a 0 0 00:09:39.434 tests 1 1 1 0 0 00:09:39.434 asserts 28 28 28 0 n/a 00:09:39.434 00:09:39.434 Elapsed time = 0.000 seconds 00:09:39.434 07:19:43 unittest.unittest_sock -- unit/unittest.sh@128 -- # [[ n == y ]] 00:09:39.434 00:09:39.434 real 0m0.124s 00:09:39.434 user 0m0.046s 00:09:39.434 sys 0m0.056s 00:09:39.434 07:19:43 unittest.unittest_sock -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.434 07:19:43 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x 00:09:39.434 ************************************ 00:09:39.434 END TEST unittest_sock 00:09:39.434 ************************************ 00:09:39.434 07:19:43 unittest -- unit/unittest.sh@260 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:09:39.434 07:19:43 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.434 07:19:43 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.434 07:19:43 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:39.434 ************************************ 00:09:39.434 START TEST unittest_thread 00:09:39.434 ************************************ 00:09:39.434 07:19:43 unittest.unittest_thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:09:39.434 00:09:39.434 00:09:39.434 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.434 http://cunit.sourceforge.net/ 00:09:39.434 00:09:39.434 00:09:39.434 Suite: io_channel 00:09:39.434 Test: thread_alloc ...passed 00:09:39.434 Test: thread_send_msg ...passed 00:09:39.434 Test: thread_poller ...passed 00:09:39.434 Test: poller_pause ...passed 00:09:39.434 Test: thread_for_each ...passed 00:09:39.434 Test: for_each_channel_remove ...passed 00:09:39.434 Test: for_each_channel_unreg ...[2024-11-20 07:19:43.298473] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2193:spdk_io_device_register: *ERROR*: io_device 0x76e820509640 already registered (old:0x513000000200 new:0x5130000003c0) 00:09:39.434 passed 00:09:39.434 Test: thread_name ...passed 00:09:39.434 Test: channel ...[2024-11-20 07:19:43.301886] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2327:spdk_get_io_channel: *ERROR*: could not find io_device 0x58cc174b43c0 00:09:39.434 passed 00:09:39.434 Test: channel_destroy_races ...passed 00:09:39.434 Test: thread_exit_test ...[2024-11-20 07:19:43.306067] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 654:thread_exit: *ERROR*: thread 0x519000007380 got timeout, and move it to the exited state forcefully 00:09:39.434 passed 00:09:39.434 Test: thread_update_stats_test ...passed 00:09:39.434 Test: nested_channel ...passed 00:09:39.434 Test: 
device_unregister_and_thread_exit_race ...passed 00:09:39.434 Test: cache_closest_timed_poller ...passed 00:09:39.434 Test: multi_timed_pollers_have_same_expiration ...passed 00:09:39.434 Test: io_device_lookup ...passed 00:09:39.434 Test: spdk_spin ...[2024-11-20 07:19:43.314982] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3111:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:09:39.434 [2024-11-20 07:19:43.315040] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x76e82050a020 00:09:39.434 [2024-11-20 07:19:43.315053] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3149:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:09:39.434 [2024-11-20 07:19:43.316462] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:39.434 [2024-11-20 07:19:43.316528] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x76e82050a020 00:09:39.434 [2024-11-20 07:19:43.316551] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3132:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:09:39.434 [2024-11-20 07:19:43.316582] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x76e82050a020 00:09:39.434 [2024-11-20 07:19:43.316592] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3132:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:09:39.434 [2024-11-20 07:19:43.316623] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x76e82050a020 00:09:39.434 [2024-11-20 07:19:43.316657] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3093:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:09:39.434 [2024-11-20 07:19:43.316680] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x76e82050a020 00:09:39.434 passed 00:09:39.434 Test: for_each_channel_and_thread_exit_race ...passed 00:09:39.434 Test: for_each_thread_and_thread_exit_race ...passed 00:09:39.434 Test: poller_get_name ...passed 00:09:39.434 Test: poller_get_id ...passed 00:09:39.434 Test: poller_get_state_str ...passed 00:09:39.434 Test: poller_get_period_ticks ...passed 00:09:39.434 Test: poller_get_stats ...passed 00:09:39.434 00:09:39.434 Run Summary: Type Total Ran Passed Failed Inactive 00:09:39.434 suites 1 1 n/a 0 0 00:09:39.434 tests 25 25 25 0 0 00:09:39.434 asserts 429 429 429 0 n/a 00:09:39.434 00:09:39.434 Elapsed time = 0.051 seconds 00:09:39.434 00:09:39.434 real 0m0.096s 00:09:39.434 user 0m0.062s 00:09:39.434 sys 0m0.035s 00:09:39.434 07:19:43 unittest.unittest_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.434 07:19:43 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:09:39.434 ************************************ 00:09:39.434 END TEST unittest_thread 00:09:39.434 ************************************ 00:09:39.694 07:19:43 unittest -- unit/unittest.sh@261 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:09:39.694 07:19:43 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.694 07:19:43 unittest -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.694 07:19:43 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:39.694 ************************************ 00:09:39.694 START TEST unittest_iobuf 00:09:39.694 ************************************ 00:09:39.694 07:19:43 unittest.unittest_iobuf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:09:39.694 00:09:39.694 00:09:39.694 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.694 http://cunit.sourceforge.net/ 00:09:39.694 00:09:39.694 00:09:39.694 Suite: io_channel 00:09:39.694 Test: iobuf ...passed 00:09:39.694 Test: iobuf_cache ...[2024-11-20 07:19:43.444233] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 415:iobuf_channel_node_populate: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:09:39.694 [2024-11-20 07:19:43.444468] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 418:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:09:39.694 [2024-11-20 07:19:43.444570] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 427:iobuf_channel_node_populate: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:09:39.694 [2024-11-20 07:19:43.444606] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 430:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:09:39.694 [2024-11-20 07:19:43.444675] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 415:iobuf_channel_node_populate: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:09:39.694 [2024-11-20 07:19:43.444730] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 418:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:09:39.694 passed 00:09:39.694 Test: iobuf_priority ...passed 00:09:39.694 00:09:39.694 Run Summary: Type Total Ran Passed Failed Inactive 00:09:39.694 suites 1 1 n/a 0 0 00:09:39.694 tests 3 3 3 0 0 00:09:39.694 asserts 127 127 127 0 n/a 00:09:39.694 00:09:39.695 Elapsed time = 0.009 seconds 00:09:39.695 00:09:39.695 real 0m0.061s 00:09:39.695 user 0m0.032s 00:09:39.695 sys 0m0.030s 00:09:39.695 07:19:43 unittest.unittest_iobuf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.695 07:19:43 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:09:39.695 ************************************ 00:09:39.695 END TEST unittest_iobuf 00:09:39.695 ************************************ 00:09:39.695 07:19:43 unittest -- unit/unittest.sh@262 -- # run_test unittest_util unittest_util 00:09:39.695 07:19:43 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.695 07:19:43 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.695 07:19:43 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:39.695 ************************************ 00:09:39.695 START TEST unittest_util 00:09:39.695 ************************************ 00:09:39.695 07:19:43 unittest.unittest_util -- common/autotest_common.sh@1129 -- # unittest_util 00:09:39.695 07:19:43 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:09:39.695 00:09:39.695 00:09:39.695 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.695 http://cunit.sourceforge.net/ 00:09:39.695 00:09:39.695 00:09:39.695 Suite: base64 00:09:39.695 Test: test_base64_get_encoded_strlen ...passed 00:09:39.695 Test: test_base64_get_decoded_len ...passed 00:09:39.695 Test: test_base64_encode ...passed 00:09:39.695 Test: test_base64_decode ...passed 00:09:39.695 Test: test_base64_urlsafe_encode ...passed 00:09:39.695 Test: test_base64_urlsafe_decode ...passed 00:09:39.695 00:09:39.695 Run Summary: Type Total Ran Passed Failed Inactive 00:09:39.695 suites 1 1 n/a 0 0 00:09:39.695 tests 6 6 6 0 0 00:09:39.695 asserts 112 112 112 0 n/a 00:09:39.695 00:09:39.695 Elapsed time = 0.000 seconds 00:09:39.695 07:19:43 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:09:39.695 00:09:39.695 00:09:39.695 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.695 http://cunit.sourceforge.net/ 00:09:39.695 00:09:39.695 00:09:39.695 Suite: bit_array 00:09:39.695 Test: test_1bit ...passed 00:09:39.695 Test: test_64bit ...passed 00:09:39.695 Test: test_find ...passed 00:09:39.695 Test: test_resize ...passed 00:09:39.695 Test: test_errors ...passed 00:09:39.695 Test: test_count ...passed 00:09:39.695 Test: test_mask_store_load ...passed 00:09:39.695 Test: test_mask_clear ...passed 00:09:39.695 00:09:39.695 Run Summary: Type Total Ran Passed Failed Inactive 00:09:39.695 suites 1 1 n/a 0 0 00:09:39.695 tests 8 8 8 0 0 00:09:39.695 asserts 5075 5075 5075 0 n/a 00:09:39.695 00:09:39.695 Elapsed time = 0.002 seconds 00:09:39.954 07:19:43 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:09:39.954 00:09:39.954 00:09:39.954 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.954 http://cunit.sourceforge.net/ 00:09:39.954 00:09:39.954 00:09:39.954 Suite: cpuset 00:09:39.954 Test: test_cpuset ...passed 00:09:39.954 Test: test_cpuset_parse ...passed 00:09:39.954 Test: test_cpuset_fmt 
...passed 00:09:39.954 Test: test_cpuset_foreach ...passed 00:09:39.954 00:09:39.954 Run Summary: Type Total Ran Passed Failed Inactive 00:09:39.954 suites 1 1 n/a 0 0 00:09:39.954 tests 4 4 4 0 0 00:09:39.954 asserts 90 90 90 0 n/a 00:09:39.954 00:09:39.954 Elapsed time = 0.002 seconds 00:09:39.954 [2024-11-20 07:19:43.640999] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '[' 00:09:39.954 [2024-11-20 07:19:43.641214] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:09:39.954 [2024-11-20 07:19:43.641251] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:09:39.955 [2024-11-20 07:19:43.641281] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 236:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:09:39.955 [2024-11-20 07:19:43.641310] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:09:39.955 [2024-11-20 07:19:43.641342] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:09:39.955 [2024-11-20 07:19:43.641371] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:09:39.955 [2024-11-20 07:19:43.641404] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:09:39.955 07:19:43 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:09:39.955 00:09:39.955 00:09:39.955 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.955 http://cunit.sourceforge.net/ 00:09:39.955 00:09:39.955 00:09:39.955 Suite: crc16 00:09:39.955 Test: test_crc16_t10dif ...passed 00:09:39.955 Test: test_crc16_t10dif_seed ...passed 00:09:39.955 Test: test_crc16_t10dif_copy ...passed 00:09:39.955 00:09:39.955 Run Summary: Type Total Ran Passed Failed Inactive 00:09:39.955 suites 1 1 n/a 0 0 00:09:39.955 tests 3 3 3 0 0 00:09:39.955 asserts 5 5 5 0 n/a 00:09:39.955 00:09:39.955 Elapsed time = 0.000 seconds 00:09:39.955 07:19:43 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:09:39.955 00:09:39.955 00:09:39.955 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.955 http://cunit.sourceforge.net/ 00:09:39.955 00:09:39.955 00:09:39.955 Suite: crc32_ieee 00:09:39.955 Test: test_crc32_ieee ...passed 00:09:39.955 00:09:39.955 Run Summary: Type Total Ran Passed Failed Inactive 00:09:39.955 suites 1 1 n/a 0 0 00:09:39.955 tests 1 1 1 0 0 00:09:39.955 asserts 1 1 1 0 n/a 00:09:39.955 00:09:39.955 Elapsed time = 0.000 seconds 00:09:39.955 07:19:43 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:09:39.955 00:09:39.955 00:09:39.955 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.955 http://cunit.sourceforge.net/ 00:09:39.955 00:09:39.955 00:09:39.955 Suite: crc32c 00:09:39.955 Test: test_crc32c ...passed 00:09:39.955 Test: test_crc32c_nvme ...passed 00:09:39.955 00:09:39.955 Run Summary: Type Total Ran Passed Failed Inactive 00:09:39.955 suites 1 1 n/a 0 0 00:09:39.955 tests 2 2 2 0 0 00:09:39.955 asserts 16 16 16 0 n/a 00:09:39.955 00:09:39.955 Elapsed time = 0.000 
seconds 00:09:39.955 07:19:43 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:09:39.955 00:09:39.955 00:09:39.955 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.955 http://cunit.sourceforge.net/ 00:09:39.955 00:09:39.955 00:09:39.955 Suite: crc64 00:09:39.955 Test: test_crc64_nvme ...passed 00:09:39.955 00:09:39.955 Run Summary: Type Total Ran Passed Failed Inactive 00:09:39.955 suites 1 1 n/a 0 0 00:09:39.955 tests 1 1 1 0 0 00:09:39.955 asserts 4 4 4 0 n/a 00:09:39.955 00:09:39.955 Elapsed time = 0.001 seconds 00:09:39.955 07:19:43 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:09:39.955 00:09:39.955 00:09:39.955 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.955 http://cunit.sourceforge.net/ 00:09:39.955 00:09:39.955 00:09:39.955 Suite: string 00:09:39.955 Test: test_parse_ip_addr ...passed 00:09:39.955 Test: test_str_chomp ...passed 00:09:39.955 Test: test_parse_capacity ...passed 00:09:39.955 Test: test_sprintf_append_realloc ...passed 00:09:39.955 Test: test_strtol ...passed 00:09:39.955 Test: test_strtoll ...passed 00:09:39.955 Test: test_strarray ...passed 00:09:39.955 Test: test_strcpy_replace ...passed 00:09:39.955 00:09:39.955 Run Summary: Type Total Ran Passed Failed Inactive 00:09:39.955 suites 1 1 n/a 0 0 00:09:39.955 tests 8 8 8 0 0 00:09:39.955 asserts 161 161 161 0 n/a 00:09:39.955 00:09:39.955 Elapsed time = 0.001 seconds 00:09:39.955 07:19:43 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:09:39.955 00:09:39.955 00:09:39.955 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.955 http://cunit.sourceforge.net/ 00:09:39.955 00:09:39.955 00:09:39.955 Suite: dif 00:09:39.955 Test: dif_generate_and_verify_test ...[2024-11-20 07:19:43.876274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:09:39.955 [2024-11-20 07:19:43.876829] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:09:39.955 [2024-11-20 07:19:43.877206] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:09:39.955 [2024-11-20 07:19:43.877559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:09:39.955 [2024-11-20 07:19:43.877932] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:09:39.955 [2024-11-20 07:19:43.878305] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:09:39.955 passed 00:09:39.955 Test: dif_disable_check_test ...[2024-11-20 07:19:43.879540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:09:39.955 [2024-11-20 07:19:43.879930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:09:39.955 [2024-11-20 07:19:43.880325] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:09:40.217 passed 00:09:40.217 
Test: dif_generate_and_verify_different_pi_formats_test ...[2024-11-20 07:19:43.881617] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:09:40.217 [2024-11-20 07:19:43.881977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:09:40.217 [2024-11-20 07:19:43.882296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:09:40.217 [2024-11-20 07:19:43.882639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:09:40.217 [2024-11-20 07:19:43.882975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:40.217 [2024-11-20 07:19:43.883364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:40.217 [2024-11-20 07:19:43.883577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:40.217 [2024-11-20 07:19:43.883807] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:40.217 [2024-11-20 07:19:43.884027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:09:40.217 [2024-11-20 07:19:43.884276] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:09:40.217 passed 00:09:40.217 Test: dif_apptag_mask_test ...[2024-11-20 07:19:43.884501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:09:40.218 [2024-11-20 07:19:43.884733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:09:40.218 [2024-11-20 07:19:43.884960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:09:40.218 passed 00:09:40.218 Test: dif_sec_8_md_8_error_test ...[2024-11-20 07:19:43.885128] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 609:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed 00:09:40.218 passed 00:09:40.218 Test: dif_sec_512_md_0_error_test ...passed 00:09:40.218 Test: dif_sec_512_md_16_error_test ...[2024-11-20 07:19:43.885159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:40.218 [2024-11-20 07:19:43.885189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 620:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:09:40.218 passed 00:09:40.218 Test: dif_sec_4096_md_0_8_error_test ...passed 00:09:40.218 Test: dif_sec_4100_md_128_error_test ...[2024-11-20 07:19:43.885209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 620:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:09:40.218 [2024-11-20 07:19:43.885230] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:09:40.218 [2024-11-20 07:19:43.885270] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:40.218 [2024-11-20 07:19:43.885295] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:40.218 [2024-11-20 07:19:43.885317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:40.218 passed 00:09:40.218 Test: dif_guard_seed_test ...[2024-11-20 07:19:43.885342] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 620:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:09:40.218 [2024-11-20 07:19:43.885358] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 620:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:09:40.218 passed 00:09:40.218 Test: dif_guard_value_test ...passed 00:09:40.218 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:09:40.218 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:09:40.218 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:09:40.218 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:40.218 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:40.218 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:09:40.218 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:09:40.218 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:09:40.218 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:09:40.218 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:40.218 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:09:40.218 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:09:40.218 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:09:40.218 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:09:40.218 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:09:40.218 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:09:40.218 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:40.218 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:40.218 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-20 07:19:43.918017] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:09:40.218 [2024-11-20 07:19:43.919804] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:09:40.218 [2024-11-20 07:19:43.921546] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.218 [2024-11-20 07:19:43.923207] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.218 [2024-11-20 07:19:43.924860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.218 [2024-11-20 07:19:43.926473] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.218 [2024-11-20 07:19:43.928150] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=a1f2 00:09:40.218 [2024-11-20 07:19:43.929002] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=895a 00:09:40.218 [2024-11-20 07:19:43.929885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3ab753ed, Actual=1ab753ed 00:09:40.218 [2024-11-20 07:19:43.931425] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18574660, Actual=38574660 00:09:40.218 [2024-11-20 07:19:43.933097] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.218 [2024-11-20 07:19:43.934696] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.218 [2024-11-20 07:19:43.936248] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.218 [2024-11-20 07:19:43.937851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.218 [2024-11-20 07:19:43.939404] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=649efe23 00:09:40.218 [2024-11-20 07:19:43.940193] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=2d21e427 00:09:40.218 [2024-11-20 07:19:43.940960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:09:40.218 [2024-11-20 07:19:43.942504] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d6837a266, Actual=88010a2d4837a266 00:09:40.218 [2024-11-20 07:19:43.944122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.218 [2024-11-20 07:19:43.945672] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.218 [2024-11-20 07:19:43.947244] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:09:40.218 [2024-11-20 07:19:43.948803] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:09:40.218 [2024-11-20 07:19:43.950429] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1faed8cc972c0cca 00:09:40.218 passed 00:09:40.218 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-11-20 07:19:43.951202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=72208b9601d66033 00:09:40.218 [2024-11-20 07:19:43.951391] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:09:40.218 [2024-11-20 
07:19:43.951580] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:09:40.218 [2024-11-20 07:19:43.951777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.218 [2024-11-20 07:19:43.951963] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.218 [2024-11-20 07:19:43.952236] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.218 [2024-11-20 07:19:43.952423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.218 [2024-11-20 07:19:43.952619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=a1f2 00:09:40.218 [2024-11-20 07:19:43.952791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=895a 00:09:40.218 [2024-11-20 07:19:43.952954] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3ab753ed, Actual=1ab753ed 00:09:40.218 [2024-11-20 07:19:43.953134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18574660, Actual=38574660 00:09:40.218 [2024-11-20 07:19:43.953327] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.218 [2024-11-20 07:19:43.953511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.218 [2024-11-20 07:19:43.953699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.218 [2024-11-20 07:19:43.953874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.218 [2024-11-20 07:19:43.954055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=649efe23 00:09:40.218 [2024-11-20 07:19:43.954222] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=2d21e427 00:09:40.218 [2024-11-20 07:19:43.954364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:09:40.218 [2024-11-20 07:19:43.954548] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d6837a266, Actual=88010a2d4837a266 00:09:40.218 [2024-11-20 07:19:43.954748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.218 [2024-11-20 07:19:43.954951] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.218 [2024-11-20 07:19:43.955153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, 
Actual=200000000058 00:09:40.218 [2024-11-20 07:19:43.955342] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:09:40.218 [2024-11-20 07:19:43.955536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1faed8cc972c0cca 00:09:40.218 [2024-11-20 07:19:43.955703] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=72208b9601d66033 00:09:40.218 passed 00:09:40.218 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-11-20 07:19:43.955897] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:09:40.218 [2024-11-20 07:19:43.956088] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:09:40.219 [2024-11-20 07:19:43.956289] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 07:19:43.956490] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 07:19:43.956692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.219 [2024-11-20 07:19:43.956884] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.219 [2024-11-20 07:19:43.957063] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=a1f2 00:09:40.219 [2024-11-20 07:19:43.957221] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=895a 00:09:40.219 [2024-11-20 07:19:43.957381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3ab753ed, Actual=1ab753ed 00:09:40.219 [2024-11-20 07:19:43.957566] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18574660, Actual=38574660 00:09:40.219 [2024-11-20 07:19:43.957762] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 07:19:43.957936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 07:19:43.958120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.219 [2024-11-20 07:19:43.958356] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.219 [2024-11-20 07:19:43.958543] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=649efe23 00:09:40.219 [2024-11-20 07:19:43.958708] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=2d21e427 00:09:40.219 
[2024-11-20 07:19:43.958863] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:09:40.219 [2024-11-20 07:19:43.959044] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d6837a266, Actual=88010a2d4837a266 00:09:40.219 [2024-11-20 07:19:43.959241] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 07:19:43.959428] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 07:19:43.959621] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:09:40.219 [2024-11-20 07:19:43.959812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:09:40.219 [2024-11-20 07:19:43.960004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1faed8cc972c0cca 00:09:40.219 [2024-11-20 07:19:43.960165] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=72208b9601d66033 00:09:40.219 passed 00:09:40.219 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-11-20 07:19:43.960365] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:09:40.219 [2024-11-20 07:19:43.960550] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:09:40.219 [2024-11-20 07:19:43.960751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 07:19:43.960942] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 07:19:43.961132] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.219 [2024-11-20 07:19:43.961325] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.219 [2024-11-20 07:19:43.961586] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=a1f2 00:09:40.219 [2024-11-20 07:19:43.961754] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=895a 00:09:40.219 [2024-11-20 07:19:43.961905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3ab753ed, Actual=1ab753ed 00:09:40.219 [2024-11-20 07:19:43.962092] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18574660, Actual=38574660 00:09:40.219 [2024-11-20 07:19:43.962278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 
07:19:43.962464] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 07:19:43.962645] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.219 [2024-11-20 07:19:43.962851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.219 [2024-11-20 07:19:43.963037] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=649efe23 00:09:40.219 [2024-11-20 07:19:43.963192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=2d21e427 00:09:40.219 [2024-11-20 07:19:43.963354] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:09:40.219 [2024-11-20 07:19:43.963543] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d6837a266, Actual=88010a2d4837a266 00:09:40.219 [2024-11-20 07:19:43.963738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 07:19:43.963922] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 07:19:43.964120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:09:40.219 [2024-11-20 07:19:43.964312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:09:40.219 [2024-11-20 07:19:43.964503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBApassed 00:09:40.219 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...=88, Expected=a576a7728ecc20d3, Actual=1faed8cc972c0cca 00:09:40.219 [2024-11-20 07:19:43.964712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=72208b9601d66033 00:09:40.219 [2024-11-20 07:19:43.964885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:09:40.219 [2024-11-20 07:19:43.965064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:09:40.219 [2024-11-20 07:19:43.965265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 07:19:43.965455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 07:19:43.965642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.219 [2024-11-20 07:19:43.965829] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.219 
[2024-11-20 07:19:43.966016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=a1f2 00:09:40.219 passed 00:09:40.219 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-11-20 07:19:43.966169] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=895a 00:09:40.219 [2024-11-20 07:19:43.966366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3ab753ed, Actual=1ab753ed 00:09:40.219 [2024-11-20 07:19:43.966559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18574660, Actual=38574660 00:09:40.219 [2024-11-20 07:19:43.966764] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 07:19:43.966956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 07:19:43.967151] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.219 [2024-11-20 07:19:43.967340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.219 [2024-11-20 07:19:43.967524] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=649efe23 00:09:40.219 [2024-11-20 07:19:43.967694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=2d21e427 00:09:40.219 [2024-11-20 07:19:43.967877] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:09:40.219 [2024-11-20 07:19:43.968070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d6837a266, Actual=88010a2d4837a266 00:09:40.219 [2024-11-20 07:19:43.968269] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 07:19:43.968469] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 07:19:43.968695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:09:40.219 [2024-11-20 07:19:43.968907] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:09:40.219 [2024-11-20 07:19:43.969126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1faed8cc972c0cca 00:09:40.219 passed 00:09:40.219 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-11-20 07:19:43.969312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=72208b9601d66033 00:09:40.219 [2024-11-20 07:19:43.969515] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:09:40.219 [2024-11-20 07:19:43.969732] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:09:40.219 [2024-11-20 07:19:43.969937] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.219 [2024-11-20 07:19:43.970155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.220 [2024-11-20 07:19:43.970361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.220 [2024-11-20 07:19:43.970561] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.220 [2024-11-20 07:19:43.970792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=a1f2 00:09:40.220 passed 00:09:40.220 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-11-20 07:19:43.970969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=895a 00:09:40.220 [2024-11-20 07:19:43.971169] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3ab753ed, Actual=1ab753ed 00:09:40.220 [2024-11-20 07:19:43.971387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18574660, Actual=38574660 00:09:40.220 [2024-11-20 07:19:43.971600] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.220 [2024-11-20 07:19:43.971826] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.220 [2024-11-20 07:19:43.972032] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.220 [2024-11-20 07:19:43.972257] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:09:40.220 [2024-11-20 07:19:43.972470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=649efe23 00:09:40.220 [2024-11-20 07:19:43.972654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=2d21e427 00:09:40.220 [2024-11-20 07:19:43.972861] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:09:40.220 [2024-11-20 07:19:43.973071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d6837a266, Actual=88010a2d4837a266 00:09:40.220 [2024-11-20 07:19:43.973273] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.220 [2024-11-20 07:19:43.973495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:09:40.220 [2024-11-20 07:19:43.973718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:09:40.220 [2024-11-20 07:19:43.973938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:09:40.220 [2024-11-20 07:19:43.974156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1faed8cc972c0cca 00:09:40.220 passed 00:09:40.220 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...[2024-11-20 07:19:43.974331] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=72208b9601d66033 00:09:40.220 passed 00:09:40.220 Test: dif_copy_sec_512_md_8_dif_disable_single_iov ...passed 00:09:40.220 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:09:40.220 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:09:40.220 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:40.220 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_bounce_iovs_test ...passed 00:09:40.220 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:09:40.220 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:09:40.220 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:40.220 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:09:40.220 Test: dif_copy_sec_512_md_8_prchk_7_multi_bounce_iovs_complex_splits ...passed 00:09:40.220 Test: dif_copy_sec_512_md_8_dif_disable_multi_bounce_iovs_complex_splits ...passed 00:09:40.220 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:40.220 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-20 07:19:44.013434] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd48, Actual=fd4c 00:09:40.220 [2024-11-20 07:19:44.014222] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=98b4, Actual=98b0 00:09:40.220 [2024-11-20 07:19:44.014981] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:09:40.220 [2024-11-20 07:19:44.015724] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:09:40.220 [2024-11-20 07:19:44.016442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=59 00:09:40.220 [2024-11-20 07:19:44.017151] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=59 00:09:40.220 [2024-11-20 07:19:44.017869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=448f 00:09:40.220 [2024-11-20 07:19:44.018561] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=7577 00:09:40.220 [2024-11-20 07:19:44.019276] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, 
Expected=1ab753e9, Actual=1ab753ed 00:09:40.220 [2024-11-20 07:19:44.019977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=af1c42b5, Actual=af1c42b1 00:09:40.220 [2024-11-20 07:19:44.020699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:09:40.220 [2024-11-20 07:19:44.021383] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:09:40.220 [2024-11-20 07:19:44.022080] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=59 00:09:40.220 [2024-11-20 07:19:44.022775] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=59 00:09:40.220 [2024-11-20 07:19:44.023488] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=d802aeeb 00:09:40.220 [2024-11-20 07:19:44.024192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=e74fc468 00:09:40.220 [2024-11-20 07:19:44.024962] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d7, Actual=a576a7728ecc20d3 00:09:40.220 [2024-11-20 07:19:44.025647] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=7b8c310120b9ce1f, Actual=7b8c310120b9ce1b 00:09:40.220 [2024-11-20 07:19:44.026366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:09:40.220 [2024-11-20 07:19:44.027074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:09:40.220 [2024-11-20 07:19:44.027799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=40000005d 00:09:40.220 [2024-11-20 07:19:44.028564] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=40000005d 00:09:40.220 [2024-11-20 07:19:44.029283] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=95fc9ec27c5092dd 00:09:40.220 passed 00:09:40.220 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-20 07:19:44.029996] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=a93c888d5554e8fb 00:09:40.220 [2024-11-20 07:19:44.030216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd48, Actual=fd4c 00:09:40.220 [2024-11-20 07:19:44.030402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fb35, Actual=fb31 00:09:40.220 [2024-11-20 07:19:44.030628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:40.220 [2024-11-20 07:19:44.030806] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 
00:09:40.220 [2024-11-20 07:19:44.030982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:09:40.220 [2024-11-20 07:19:44.031148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:09:40.220 [2024-11-20 07:19:44.031321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=448f 00:09:40.220 [2024-11-20 07:19:44.031485] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=16f6 00:09:40.220 [2024-11-20 07:19:44.031660] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753e9, Actual=1ab753ed 00:09:40.220 [2024-11-20 07:19:44.031837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=3723d364, Actual=3723d360 00:09:40.220 [2024-11-20 07:19:44.032015] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:40.220 [2024-11-20 07:19:44.032183] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:40.220 [2024-11-20 07:19:44.032391] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:09:40.220 [2024-11-20 07:19:44.032585] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:09:40.220 [2024-11-20 07:19:44.032791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=d802aeeb 00:09:40.220 [2024-11-20 07:19:44.032983] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=c202f449, Actual=7f7055b9 00:09:40.220 [2024-11-20 07:19:44.033192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d7, Actual=a576a7728ecc20d3 00:09:40.220 [2024-11-20 07:19:44.033389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=8e6e3e9206564a40, Actual=8e6e3e9206564a44 00:09:40.220 [2024-11-20 07:19:44.033591] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:40.220 [2024-11-20 07:19:44.033794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:40.220 [2024-11-20 07:19:44.034004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400000059 00:09:40.220 [2024-11-20 07:19:44.034169] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400000059 00:09:40.220 [2024-11-20 07:19:44.034359] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=95fc9ec27c5092dd 00:09:40.220 passed 00:09:40.220 Test: dix_sec_0_md_8_error ...passed 00:09:40.221 Test: dix_sec_512_md_0_error ...passed 
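Note on the diagnostics above: the "Failed to compare Guard / App Tag / Ref Tag" messages that dominate this suite are expected output, because the DIF unit tests inject corrupted protection information and assert that the verify path reports exactly these mismatches. As background for reading them, the classic T10 protection-information tuple carries a 16-bit guard (a CRC over the data block), a 16-bit application tag, and a 32-bit reference tag; the wider Expected/Actual values in some lines come from cases that exercise larger guard and reference fields. The sketch below is purely illustrative and is not SPDK's implementation (struct dif_tuple and dif_verify_tuple are made-up names); it only shows the shape of the per-field comparison that produces this kind of message.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical 8-byte protection-information tuple (classic T10 DIF layout). */
    struct dif_tuple {
        uint16_t guard;    /* CRC computed over the data block */
        uint16_t app_tag;  /* opaque application tag */
        uint32_t ref_tag;  /* typically derived from the LBA */
    };

    /* Compare a stored tuple against the expected one and report the first
     * mismatching field, mirroring the style of the messages in this log. */
    static int
    dif_verify_tuple(const struct dif_tuple *exp, const struct dif_tuple *act, uint64_t lba)
    {
        if (exp->guard != act->guard) {
            fprintf(stderr, "Failed to compare Guard: LBA=%" PRIu64 ", Expected=%" PRIx16 ", Actual=%" PRIx16 "\n",
                    lba, exp->guard, act->guard);
            return -1;
        }
        if (exp->app_tag != act->app_tag) {
            fprintf(stderr, "Failed to compare App Tag: LBA=%" PRIu64 ", Expected=%" PRIx16 ", Actual=%" PRIx16 "\n",
                    lba, exp->app_tag, act->app_tag);
            return -1;
        }
        if (exp->ref_tag != act->ref_tag) {
            fprintf(stderr, "Failed to compare Ref Tag: LBA=%" PRIu64 ", Expected=%" PRIx32 ", Actual=%" PRIx32 "\n",
                    lba, exp->ref_tag, act->ref_tag);
            return -1;
        }
        return 0;
    }

    int
    main(void)
    {
        struct dif_tuple expected = { .guard = 0xdd4c, .app_tag = 0x88, .ref_tag = 0x58 };
        struct dif_tuple on_disk  = { .guard = 0xfd4c, .app_tag = 0x88, .ref_tag = 0x58 };

        /* Prints the same shape of message as the log above:
         * "Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c". */
        return dif_verify_tuple(&expected, &on_disk, 88) == 0 ? 0 : 1;
    }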
00:09:40.221 Test: dix_sec_512_md_16_error ...passed 00:09:40.221 Test: dix_sec_4096_md_0_8_error ...passed 00:09:40.221 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:09:40.221 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...[2024-11-20 07:19:44.034531] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=5cde871e73bb6ca4 00:09:40.221 [2024-11-20 07:19:44.034576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 609:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed 00:09:40.221 [2024-11-20 07:19:44.034597] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:40.221 [2024-11-20 07:19:44.034621] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 620:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:09:40.221 [2024-11-20 07:19:44.034635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 620:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:09:40.221 [2024-11-20 07:19:44.034653] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:40.221 [2024-11-20 07:19:44.034661] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:40.221 [2024-11-20 07:19:44.034702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:40.221 [2024-11-20 07:19:44.034710] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:40.221 passed 00:09:40.221 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:09:40.221 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:40.221 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:09:40.221 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:09:40.221 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:40.221 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:09:40.221 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:40.221 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-20 07:19:44.065826] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd48, Actual=fd4c 00:09:40.221 [2024-11-20 07:19:44.066732] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=98b4, Actual=98b0 00:09:40.221 [2024-11-20 07:19:44.067629] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:09:40.221 [2024-11-20 07:19:44.068529] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:09:40.221 [2024-11-20 07:19:44.069447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=59 00:09:40.221 [2024-11-20 07:19:44.070298] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=59 00:09:40.221 [2024-11-20 07:19:44.071092] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=448f 00:09:40.221 
[2024-11-20 07:19:44.072009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=7577 00:09:40.221 [2024-11-20 07:19:44.072940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753e9, Actual=1ab753ed 00:09:40.221 [2024-11-20 07:19:44.073799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=af1c42b5, Actual=af1c42b1 00:09:40.221 [2024-11-20 07:19:44.074604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:09:40.221 [2024-11-20 07:19:44.075380] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:09:40.221 [2024-11-20 07:19:44.076237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=59 00:09:40.221 [2024-11-20 07:19:44.077123] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=59 00:09:40.221 [2024-11-20 07:19:44.078005] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=d802aeeb 00:09:40.221 [2024-11-20 07:19:44.078832] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=e74fc468 00:09:40.221 [2024-11-20 07:19:44.079659] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d7, Actual=a576a7728ecc20d3 00:09:40.221 [2024-11-20 07:19:44.080517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=7b8c310120b9ce1f, Actual=7b8c310120b9ce1b 00:09:40.221 [2024-11-20 07:19:44.081364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:09:40.221 [2024-11-20 07:19:44.082080] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:09:40.221 [2024-11-20 07:19:44.082792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=40000005d 00:09:40.221 [2024-11-20 07:19:44.083606] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=40000005d 00:09:40.221 [2024-11-20 07:19:44.084419] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=95fc9ec27c5092dd 00:09:40.221 passed 00:09:40.221 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-20 07:19:44.085231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=a93c888d5554e8fb 00:09:40.221 [2024-11-20 07:19:44.085453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd48, Actual=fd4c 00:09:40.221 [2024-11-20 07:19:44.085614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fb35, Actual=fb31 00:09:40.221 [2024-11-20 07:19:44.085828] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:40.221 [2024-11-20 07:19:44.086022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:40.221 [2024-11-20 07:19:44.086269] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:09:40.221 [2024-11-20 07:19:44.086476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:09:40.221 [2024-11-20 07:19:44.086676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=448f 00:09:40.221 [2024-11-20 07:19:44.086879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=16f6 00:09:40.221 [2024-11-20 07:19:44.087096] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753e9, Actual=1ab753ed 00:09:40.221 [2024-11-20 07:19:44.087285] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=6e9c7740, Actual=6e9c7744 00:09:40.221 [2024-11-20 07:19:44.087480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:40.221 [2024-11-20 07:19:44.087694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:40.221 [2024-11-20 07:19:44.087891] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:09:40.221 [2024-11-20 07:19:44.088100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:09:40.221 [2024-11-20 07:19:44.088330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=d802aeeb 00:09:40.221 [2024-11-20 07:19:44.088536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=26cff19d 00:09:40.221 [2024-11-20 07:19:44.088753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d7, Actual=a576a7728ecc20d3 00:09:40.221 [2024-11-20 07:19:44.088946] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=8e6e3e9206564a40, Actual=8e6e3e9206564a44 00:09:40.221 [2024-11-20 07:19:44.089161] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:40.221 [2024-11-20 07:19:44.089378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:40.221 [2024-11-20 07:19:44.089592] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400000059 00:09:40.221 [2024-11-20 07:19:44.089810] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400000059 00:09:40.221 [2024-11-20 
07:19:44.090028] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=95fc9ec27c5092dd 00:09:40.221 passed 00:09:40.221 Test: set_md_interleave_iovs_test ...[2024-11-20 07:19:44.090234] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=5cde871e73bb6ca4 00:09:40.221 passed 00:09:40.221 Test: set_md_interleave_iovs_split_test ...passed 00:09:40.221 Test: dif_generate_stream_pi_16_test ...passed 00:09:40.221 Test: dif_generate_stream_test ...passed 00:09:40.221 Test: set_md_interleave_iovs_alignment_test ...[2024-11-20 07:19:44.096274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1946:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 00:09:40.221 passed 00:09:40.221 Test: dif_generate_split_test ...passed 00:09:40.221 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:09:40.221 Test: dif_verify_split_test ...passed 00:09:40.221 Test: dif_verify_stream_multi_segments_test ...passed 00:09:40.221 Test: update_crc32c_pi_16_test ...passed 00:09:40.221 Test: update_crc32c_test ...passed 00:09:40.221 Test: dif_update_crc32c_split_test ...passed 00:09:40.221 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:09:40.221 Test: get_range_with_md_test ...passed 00:09:40.221 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:09:40.221 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:09:40.221 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:09:40.221 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:09:40.221 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:09:40.221 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:09:40.221 Test: dif_generate_and_verify_unmap_test ...passed 00:09:40.221 Test: dif_pi_format_check_test ...passed 00:09:40.221 Test: dif_type_check_test ...passed 00:09:40.221 00:09:40.221 Run Summary: Type Total Ran Passed Failed Inactive 00:09:40.221 suites 1 1 n/a 0 0 00:09:40.221 tests 90 90 90 0 0 00:09:40.221 asserts 3705 3705 3705 0 n/a 00:09:40.221 00:09:40.221 Elapsed time = 0.249 seconds 00:09:40.482 07:19:44 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:09:40.482 00:09:40.482 00:09:40.482 CUnit - A unit testing framework for C - Version 2.1-3 00:09:40.482 http://cunit.sourceforge.net/ 00:09:40.482 00:09:40.482 00:09:40.482 Suite: iov 00:09:40.482 Test: test_single_iov ...passed 00:09:40.482 Test: test_simple_iov ...passed 00:09:40.482 Test: test_complex_iov ...passed 00:09:40.482 Test: test_iovs_to_buf ...passed 00:09:40.482 Test: test_buf_to_iovs ...passed 00:09:40.482 Test: test_memset ...passed 00:09:40.482 Test: test_iov_one ...passed 00:09:40.482 Test: test_iov_xfer ...passed 00:09:40.482 00:09:40.482 Run Summary: Type Total Ran Passed Failed Inactive 00:09:40.482 suites 1 1 n/a 0 0 00:09:40.482 tests 8 8 8 0 0 00:09:40.482 asserts 156 156 156 0 n/a 00:09:40.482 00:09:40.482 Elapsed time = 0.000 seconds 00:09:40.482 07:19:44 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:09:40.482 00:09:40.482 00:09:40.482 CUnit - A unit testing framework for C - Version 2.1-3 00:09:40.482 http://cunit.sourceforge.net/ 00:09:40.482 00:09:40.482 00:09:40.482 Suite: math 00:09:40.482 
Test: test_serial_number_arithmetic ...passed 00:09:40.482 Suite: erase 00:09:40.482 Test: test_memset_s ...passed 00:09:40.482 00:09:40.482 Run Summary: Type Total Ran Passed Failed Inactive 00:09:40.482 suites 2 2 n/a 0 0 00:09:40.482 tests 2 2 2 0 0 00:09:40.482 asserts 18 18 18 0 n/a 00:09:40.482 00:09:40.482 Elapsed time = 0.000 seconds 00:09:40.482 07:19:44 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:09:40.482 00:09:40.482 00:09:40.482 CUnit - A unit testing framework for C - Version 2.1-3 00:09:40.482 http://cunit.sourceforge.net/ 00:09:40.482 00:09:40.482 00:09:40.482 Suite: pipe 00:09:40.482 Test: test_create_destroy ...passed 00:09:40.482 Test: test_write_get_buffer ...passed 00:09:40.482 Test: test_write_advance ...passed 00:09:40.482 Test: test_read_get_buffer ...passed 00:09:40.482 Test: test_read_advance ...passed 00:09:40.482 Test: test_data ...passed 00:09:40.482 00:09:40.482 Run Summary: Type Total Ran Passed Failed Inactive 00:09:40.482 suites 1 1 n/a 0 0 00:09:40.482 tests 6 6 6 0 0 00:09:40.482 asserts 251 251 251 0 n/a 00:09:40.482 00:09:40.482 Elapsed time = 0.000 seconds 00:09:40.482 07:19:44 unittest.unittest_util -- unit/unittest.sh@146 -- # uname -s 00:09:40.482 07:19:44 unittest.unittest_util -- unit/unittest.sh@146 -- # '[' Linux = Linux ']' 00:09:40.482 07:19:44 unittest.unittest_util -- unit/unittest.sh@147 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/fd_group.c/fd_group_ut 00:09:40.482 00:09:40.482 00:09:40.482 CUnit - A unit testing framework for C - Version 2.1-3 00:09:40.482 http://cunit.sourceforge.net/ 00:09:40.482 00:09:40.482 00:09:40.482 Suite: fd_group 00:09:40.482 Test: test_fd_group_basic ...passed 00:09:40.482 Test: test_fd_group_nest_unnest ...passed 00:09:40.482 00:09:40.482 Run Summary: Type Total Ran Passed Failed Inactive 00:09:40.482 suites 1 1 n/a 0 0 00:09:40.482 tests 2 2 2 0 0 00:09:40.482 asserts 41 41 41 0 n/a 00:09:40.482 00:09:40.482 Elapsed time = 0.000 seconds 00:09:40.482 00:09:40.482 real 0m0.748s 00:09:40.482 user 0m0.481s 00:09:40.482 sys 0m0.270s 00:09:40.482 07:19:44 unittest.unittest_util -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.482 07:19:44 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:09:40.482 ************************************ 00:09:40.482 END TEST unittest_util 00:09:40.482 ************************************ 00:09:40.482 07:19:44 unittest -- unit/unittest.sh@263 -- # [[ y == y ]] 00:09:40.482 07:19:44 unittest -- unit/unittest.sh@264 -- # run_test unittest_fsdev unittest_fsdev 00:09:40.483 07:19:44 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.483 07:19:44 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.483 07:19:44 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:40.483 ************************************ 00:09:40.483 START TEST unittest_fsdev 00:09:40.483 ************************************ 00:09:40.483 07:19:44 unittest.unittest_fsdev -- common/autotest_common.sh@1129 -- # unittest_fsdev 00:09:40.483 07:19:44 unittest.unittest_fsdev -- unit/unittest.sh@152 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/fsdev/fsdev.c/fsdev_ut 00:09:40.483 00:09:40.483 00:09:40.483 CUnit - A unit testing framework for C - Version 2.1-3 00:09:40.483 http://cunit.sourceforge.net/ 00:09:40.483 00:09:40.483 00:09:40.483 Suite: fsdev 00:09:40.483 Test: ut_fsdev_test_open_close ...passed 00:09:40.483 Test: ut_fsdev_test_set_opts ...[2024-11-20 
07:19:44.374899] fsdev.c: 631:spdk_fsdev_set_opts: *ERROR*: opts cannot be NULL 00:09:40.483 passed 00:09:40.483 Test: ut_fsdev_test_get_io_channel ...[2024-11-20 07:19:44.375141] fsdev.c: 636:spdk_fsdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:09:40.483 passed 00:09:40.483 Test: ut_fsdev_test_mount_ok ...passed 00:09:40.483 Test: ut_fsdev_test_mount_err ...passed 00:09:40.483 Test: ut_fsdev_test_umount ...passed 00:09:40.483 Test: ut_fsdev_test_lookup_ok ...passed 00:09:40.483 Test: ut_fsdev_test_lookup_err ...passed 00:09:40.483 Test: ut_fsdev_test_forget ...passed 00:09:40.483 Test: ut_fsdev_test_getattr ...passed 00:09:40.483 Test: ut_fsdev_test_setattr ...passed 00:09:40.483 Test: ut_fsdev_test_readlink ...passed 00:09:40.483 Test: ut_fsdev_test_symlink ...passed 00:09:40.483 Test: ut_fsdev_test_mknod ...passed 00:09:40.483 Test: ut_fsdev_test_mkdir ...passed 00:09:40.483 Test: ut_fsdev_test_unlink ...passed 00:09:40.483 Test: ut_fsdev_test_rmdir ...passed 00:09:40.483 Test: ut_fsdev_test_rename ...passed 00:09:40.483 Test: ut_fsdev_test_link ...passed 00:09:40.483 Test: ut_fsdev_test_fopen ...passed 00:09:40.483 Test: ut_fsdev_test_read ...passed 00:09:40.483 Test: ut_fsdev_test_write ...passed 00:09:40.483 Test: ut_fsdev_test_statfs ...passed 00:09:40.483 Test: ut_fsdev_test_release ...passed 00:09:40.483 Test: ut_fsdev_test_fsync ...passed 00:09:40.483 Test: ut_fsdev_test_getxattr ...passed 00:09:40.483 Test: ut_fsdev_test_setxattr ...passed 00:09:40.483 Test: ut_fsdev_test_listxattr ...passed 00:09:40.483 Test: ut_fsdev_test_listxattr_get_size ...passed 00:09:40.483 Test: ut_fsdev_test_removexattr ...passed 00:09:40.483 Test: ut_fsdev_test_flush ...passed 00:09:40.483 Test: ut_fsdev_test_opendir ...passed 00:09:40.483 Test: ut_fsdev_test_readdir ...passed 00:09:40.743 Test: ut_fsdev_test_releasedir ...passed 00:09:40.743 Test: ut_fsdev_test_fsyncdir ...passed 00:09:40.743 Test: ut_fsdev_test_flock ...passed 00:09:40.743 Test: ut_fsdev_test_create ...passed 00:09:40.743 Test: ut_fsdev_test_abort ...passed 00:09:40.743 Test: ut_fsdev_test_fallocate ...passed 00:09:40.743 Test: ut_fsdev_test_copy_file_range ...passed[2024-11-20 07:19:44.416294] fsdev.c: 354:fsdev_mgr_unregister_cb: *ERROR*: fsdev IO pool count is 65535 but should be 131070 00:09:40.743 00:09:40.743 00:09:40.743 Run Summary: Type Total Ran Passed Failed Inactive 00:09:40.743 suites 1 1 n/a 0 0 00:09:40.743 tests 40 40 40 0 0 00:09:40.743 asserts 2840 2840 2840 0 n/a 00:09:40.743 00:09:40.743 Elapsed time = 0.042 seconds 00:09:40.743 00:09:40.743 real 0m0.099s 00:09:40.743 user 0m0.052s 00:09:40.743 sys 0m0.047s 00:09:40.743 07:19:44 unittest.unittest_fsdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.743 07:19:44 unittest.unittest_fsdev -- common/autotest_common.sh@10 -- # set +x 00:09:40.743 ************************************ 00:09:40.743 END TEST unittest_fsdev 00:09:40.743 ************************************ 00:09:40.743 07:19:44 unittest -- unit/unittest.sh@266 -- # [[ y == y ]] 00:09:40.743 07:19:44 unittest -- unit/unittest.sh@267 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:09:40.743 07:19:44 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.743 07:19:44 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.743 07:19:44 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:40.743 ************************************ 00:09:40.743 START TEST unittest_vhost 00:09:40.743 
************************************ 00:09:40.743 07:19:44 unittest.unittest_vhost -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:09:40.743 00:09:40.743 00:09:40.743 CUnit - A unit testing framework for C - Version 2.1-3 00:09:40.743 http://cunit.sourceforge.net/ 00:09:40.743 00:09:40.743 00:09:40.743 Suite: vhost_suite 00:09:40.743 Test: desc_to_iov_test ...[2024-11-20 07:19:44.546610] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:09:40.743 passed 00:09:40.743 Test: create_controller_test ...[2024-11-20 07:19:44.552576] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 84:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:09:40.743 [2024-11-20 07:19:44.552697] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 130:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:09:40.743 [2024-11-20 07:19:44.552798] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 84:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:09:40.743 [2024-11-20 07:19:44.552867] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 130:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:09:40.743 [2024-11-20 07:19:44.552902] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 125:vhost_dev_register: *ERROR*: Can't register controller with no name 00:09:40.744 [2024-11-20 07:19:44.553332] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1781:vhost_user_dev_init: *ERROR*: Resulting socket path for controller xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxx[... long run of repeated 'x' characters trimmed for readability; the generated controller name intentionally exceeds the maximum socket path length ...] is too long: 
some_path/xxxxxxxxxxxxxxxx[... long run of repeated 'x' characters trimmed for readability ...]
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 00:09:40.744 [2024-11-20 07:19:44.554290] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 141:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:09:40.744 passed 00:09:40.744 Test: session_find_by_vid_test ...passed 00:09:40.744 Test: remove_controller_test ...[2024-11-20 07:19:44.556257] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1869:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:09:40.744 passed 00:09:40.744 Test: vq_avail_ring_get_test ...passed 00:09:40.744 Test: vq_packed_ring_test ...passed 00:09:40.744 Test: vhost_blk_construct_test ...passed 00:09:40.744 00:09:40.744 Run Summary: Type Total Ran Passed Failed Inactive 00:09:40.744 suites 1 1 n/a 0 0 00:09:40.744 tests 7 7 7 0 0 00:09:40.744 asserts 147 147 147 0 n/a 00:09:40.744 00:09:40.744 Elapsed time = 0.014 seconds 00:09:40.744 00:09:40.744 real 0m0.070s 00:09:40.744 user 0m0.036s 00:09:40.744 sys 0m0.034s 00:09:40.744 07:19:44 unittest.unittest_vhost -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.744 07:19:44 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x 00:09:40.744 ************************************ 00:09:40.744 END TEST unittest_vhost 00:09:40.744 ************************************ 00:09:40.744 07:19:44 unittest -- unit/unittest.sh@269 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:09:40.744 07:19:44 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.744 07:19:44 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.744 07:19:44 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:40.744 ************************************ 00:09:40.744 START TEST unittest_dma 00:09:40.744 ************************************ 00:09:40.744 07:19:44 unittest.unittest_dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:09:41.004 00:09:41.004 00:09:41.004 CUnit - A unit testing framework for C - Version 2.1-3 00:09:41.004 http://cunit.sourceforge.net/ 00:09:41.004 00:09:41.004 00:09:41.004 Suite: dma_suite 00:09:41.004 Test: test_dma ...[2024-11-20 07:19:44.670392] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 60:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:09:41.004 passed 00:09:41.004 00:09:41.004 Run Summary: Type Total Ran Passed Failed Inactive 00:09:41.004 suites 1 1 n/a 0 0 00:09:41.004 tests 1 1 1 0 0 00:09:41.004 asserts 54 54 54 0 n/a 00:09:41.004 00:09:41.004 Elapsed time = 0.001 seconds 00:09:41.004 00:09:41.004 real 0m0.047s 00:09:41.004 user 0m0.018s 00:09:41.004 sys 0m0.030s 00:09:41.004 07:19:44 unittest.unittest_dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.004 07:19:44 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:09:41.004 ************************************ 00:09:41.004 END TEST unittest_dma 00:09:41.004 ************************************ 00:09:41.004 07:19:44 
unittest -- unit/unittest.sh@271 -- # run_test unittest_init unittest_init 00:09:41.004 07:19:44 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:41.004 07:19:44 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.004 07:19:44 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:41.004 ************************************ 00:09:41.004 START TEST unittest_init 00:09:41.004 ************************************ 00:09:41.004 07:19:44 unittest.unittest_init -- common/autotest_common.sh@1129 -- # unittest_init 00:09:41.004 07:19:44 unittest.unittest_init -- unit/unittest.sh@156 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:09:41.004 00:09:41.004 00:09:41.004 CUnit - A unit testing framework for C - Version 2.1-3 00:09:41.004 http://cunit.sourceforge.net/ 00:09:41.004 00:09:41.004 00:09:41.004 Suite: subsystem_suite 00:09:41.004 Test: subsystem_sort_test_depends_on_single ...passed 00:09:41.004 Test: subsystem_sort_test_depends_on_multiple ...passed 00:09:41.004 Test: subsystem_sort_test_missing_dependency ...passed[2024-11-20 07:19:44.788568] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:09:41.004 [2024-11-20 07:19:44.788798] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:09:41.004 00:09:41.004 00:09:41.004 Run Summary: Type Total Ran Passed Failed Inactive 00:09:41.004 suites 1 1 n/a 0 0 00:09:41.004 tests 3 3 3 0 0 00:09:41.004 asserts 20 20 20 0 n/a 00:09:41.004 00:09:41.004 Elapsed time = 0.000 seconds 00:09:41.004 00:09:41.004 real 0m0.049s 00:09:41.004 user 0m0.030s 00:09:41.004 sys 0m0.020s 00:09:41.004 07:19:44 unittest.unittest_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.004 07:19:44 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:09:41.004 ************************************ 00:09:41.004 END TEST unittest_init 00:09:41.004 ************************************ 00:09:41.004 07:19:44 unittest -- unit/unittest.sh@272 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:09:41.004 07:19:44 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:41.004 07:19:44 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.004 07:19:44 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:41.004 ************************************ 00:09:41.004 START TEST unittest_keyring 00:09:41.004 ************************************ 00:09:41.004 07:19:44 unittest.unittest_keyring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:09:41.004 00:09:41.004 00:09:41.004 CUnit - A unit testing framework for C - Version 2.1-3 00:09:41.004 http://cunit.sourceforge.net/ 00:09:41.004 00:09:41.004 00:09:41.004 Suite: keyring 00:09:41.004 Test: test_keyring_add_remove ...[2024-11-20 07:19:44.904359] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:09:41.004 [2024-11-20 07:19:44.904734] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:09:41.004 [2024-11-20 07:19:44.904800] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 168:spdk_keyring_remove_key: *ERROR*: Key 'key0' is not owned by module 'ut2' 00:09:41.004 [2024-11-20 07:19:44.904855] 
/home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 162:spdk_keyring_remove_key: *ERROR*: Key 'key0' does not exist 00:09:41.004 [2024-11-20 07:19:44.904909] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 162:spdk_keyring_remove_key: *ERROR*: Key ':key0' does not exist 00:09:41.004 [2024-11-20 07:19:44.904983] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:09:41.004 passed 00:09:41.004 Test: test_keyring_get_put ...passed 00:09:41.004 00:09:41.004 Run Summary: Type Total Ran Passed Failed Inactive 00:09:41.004 suites 1 1 n/a 0 0 00:09:41.004 tests 2 2 2 0 0 00:09:41.004 asserts 46 46 46 0 n/a 00:09:41.004 00:09:41.004 Elapsed time = 0.001 seconds 00:09:41.004 00:09:41.004 real 0m0.048s 00:09:41.004 user 0m0.027s 00:09:41.004 sys 0m0.022s 00:09:41.004 07:19:44 unittest.unittest_keyring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.004 07:19:44 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:09:41.004 ************************************ 00:09:41.004 END TEST unittest_keyring 00:09:41.004 ************************************ 00:09:41.263 07:19:44 unittest -- unit/unittest.sh@274 -- # [[ y == y ]] 00:09:41.263 07:19:44 unittest -- unit/unittest.sh@275 -- # hostname 00:09:41.263 07:19:44 unittest -- unit/unittest.sh@275 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -d . -c --no-external -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:09:41.263 geninfo: WARNING: invalid characters removed from testname! 
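A note for readers of the CUnit blocks above: each "Suite:"/"Test:" pair and the trailing "Run Summary" (suites, tests, asserts) is standard CUnit output from the SPDK unit-test binaries. As orientation only, here is a minimal, self-contained sketch of how such a suite is assembled with plain CUnit; the suite name, test name, and add() function are hypothetical and not taken from the SPDK sources.

    #include <CUnit/Basic.h>

    /* Hypothetical unit under test. */
    static int add(int a, int b) { return a + b; }

    /* Each CU_ASSERT* here is counted in the "asserts" column of the Run Summary. */
    static void test_add(void)
    {
        CU_ASSERT_EQUAL(add(2, 2), 4);
        CU_ASSERT_EQUAL(add(-1, 1), 0);
    }

    int main(void)
    {
        CU_pSuite suite;
        unsigned int failures;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }

        /* Corresponds to the "Suite:" line printed in the log. */
        suite = CU_add_suite("example_suite", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "test_add", test_add) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        CU_basic_set_mode(CU_BRM_VERBOSE);  /* prints per-test lines and the Run Summary */
        CU_basic_run_tests();
        failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures == 0 ? 0 : 1;
    }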
00:10:20.004 07:20:23 unittest -- unit/unittest.sh@276 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:10:25.281 07:20:28 unittest -- unit/unittest.sh@277 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:27.828 07:20:31 unittest -- unit/unittest.sh@278 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:31.117 07:20:34 unittest -- unit/unittest.sh@279 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:33.706 07:20:37 unittest -- unit/unittest.sh@280 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:36.275 07:20:39 unittest -- unit/unittest.sh@281 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:38.815 07:20:42 unittest -- unit/unittest.sh@282 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:10:38.815 07:20:42 unittest -- unit/unittest.sh@283 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:10:39.396 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:39.396 Found 338 entries. 00:10:39.396 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:10:39.396 Writing .css and .png files. 00:10:39.396 Generating output. 
00:10:39.396 Processing file include/linux/virtio_ring.h 00:10:39.654 Processing file include/spdk/histogram_data.h 00:10:39.654 Processing file include/spdk/bdev_module.h 00:10:39.654 Processing file include/spdk/fsdev_module.h 00:10:39.654 Processing file include/spdk/nvmf_transport.h 00:10:39.654 Processing file include/spdk/trace.h 00:10:39.654 Processing file include/spdk/util.h 00:10:39.654 Processing file include/spdk/endian.h 00:10:39.654 Processing file include/spdk/nvme.h 00:10:39.654 Processing file include/spdk/nvme_spec.h 00:10:39.654 Processing file include/spdk/mmio.h 00:10:39.654 Processing file include/spdk/thread.h 00:10:39.655 Processing file include/spdk/base64.h 00:10:39.913 Processing file include/spdk_internal/nvme_tcp.h 00:10:39.913 Processing file include/spdk_internal/sgl.h 00:10:39.913 Processing file include/spdk_internal/virtio.h 00:10:39.913 Processing file include/spdk_internal/utf.h 00:10:39.913 Processing file include/spdk_internal/rdma_utils.h 00:10:39.913 Processing file include/spdk_internal/sock.h 00:10:40.171 Processing file lib/accel/accel_sw.c 00:10:40.171 Processing file lib/accel/accel.c 00:10:40.171 Processing file lib/accel/accel_rpc.c 00:10:40.429 Processing file lib/bdev/bdev_zone.c 00:10:40.429 Processing file lib/bdev/bdev.c 00:10:40.429 Processing file lib/bdev/part.c 00:10:40.429 Processing file lib/bdev/scsi_nvme.c 00:10:40.429 Processing file lib/bdev/bdev_rpc.c 00:10:40.686 Processing file lib/blob/zeroes.c 00:10:40.686 Processing file lib/blob/blobstore.c 00:10:40.686 Processing file lib/blob/blob_bs_dev.c 00:10:40.686 Processing file lib/blob/blobstore.h 00:10:40.686 Processing file lib/blob/request.c 00:10:40.686 Processing file lib/blobfs/blobfs.c 00:10:40.687 Processing file lib/blobfs/tree.c 00:10:40.687 Processing file lib/conf/conf.c 00:10:40.687 Processing file lib/dma/dma.c 00:10:40.945 Processing file lib/env_dpdk/pci_idxd.c 00:10:40.945 Processing file lib/env_dpdk/pci_vmd.c 00:10:40.945 Processing file lib/env_dpdk/memory.c 00:10:40.945 Processing file lib/env_dpdk/threads.c 00:10:40.945 Processing file lib/env_dpdk/pci_virtio.c 00:10:40.945 Processing file lib/env_dpdk/init.c 00:10:40.945 Processing file lib/env_dpdk/env.c 00:10:40.945 Processing file lib/env_dpdk/pci_ioat.c 00:10:40.945 Processing file lib/env_dpdk/pci_event.c 00:10:40.945 Processing file lib/env_dpdk/pci_dpdk.c 00:10:40.945 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:10:40.945 Processing file lib/env_dpdk/sigbus_handler.c 00:10:40.945 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:10:40.945 Processing file lib/env_dpdk/pci.c 00:10:41.203 Processing file lib/event/app_rpc.c 00:10:41.203 Processing file lib/event/scheduler_static.c 00:10:41.203 Processing file lib/event/app.c 00:10:41.203 Processing file lib/event/log_rpc.c 00:10:41.203 Processing file lib/event/reactor.c 00:10:41.203 Processing file lib/fsdev/fsdev_rpc.c 00:10:41.203 Processing file lib/fsdev/fsdev.c 00:10:41.203 Processing file lib/fsdev/fsdev_io.c 00:10:41.784 Processing file lib/ftl/ftl_sb.c 00:10:41.784 Processing file lib/ftl/ftl_nv_cache.h 00:10:41.784 Processing file lib/ftl/ftl_writer.h 00:10:41.784 Processing file lib/ftl/ftl_l2p_cache.c 00:10:41.784 Processing file lib/ftl/ftl_core.h 00:10:41.784 Processing file lib/ftl/ftl_io.c 00:10:41.784 Processing file lib/ftl/ftl_band.h 00:10:41.784 Processing file lib/ftl/ftl_writer.c 00:10:41.784 Processing file lib/ftl/ftl_l2p.c 00:10:41.784 Processing file lib/ftl/ftl_l2p_flat.c 00:10:41.784 Processing file lib/ftl/ftl_nv_cache.c 
00:10:41.784 Processing file lib/ftl/ftl_band.c 00:10:41.784 Processing file lib/ftl/ftl_band_ops.c 00:10:41.784 Processing file lib/ftl/ftl_trace.c 00:10:41.784 Processing file lib/ftl/ftl_reloc.c 00:10:41.784 Processing file lib/ftl/ftl_init.c 00:10:41.784 Processing file lib/ftl/ftl_debug.h 00:10:41.784 Processing file lib/ftl/ftl_layout.c 00:10:41.784 Processing file lib/ftl/ftl_p2l.c 00:10:41.784 Processing file lib/ftl/ftl_core.c 00:10:41.784 Processing file lib/ftl/ftl_rq.c 00:10:41.784 Processing file lib/ftl/ftl_io.h 00:10:41.784 Processing file lib/ftl/ftl_nv_cache_io.h 00:10:41.784 Processing file lib/ftl/ftl_p2l_log.c 00:10:41.784 Processing file lib/ftl/ftl_debug.c 00:10:41.784 Processing file lib/ftl/base/ftl_base_dev.c 00:10:41.784 Processing file lib/ftl/base/ftl_base_bdev.c 00:10:42.075 Processing file lib/ftl/mngt/ftl_mngt.c 00:10:42.075 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:10:42.075 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:10:42.075 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:10:42.075 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:10:42.075 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:10:42.075 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:10:42.075 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:10:42.075 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:10:42.075 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:10:42.075 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:10:42.075 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:10:42.075 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:10:42.075 Processing file lib/ftl/nvc/ftl_nvc_bdev_non_vss.c 00:10:42.075 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:10:42.075 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:10:42.075 Processing file lib/ftl/nvc/ftl_nvc_bdev_common.c 00:10:42.333 Processing file lib/ftl/upgrade/ftl_p2l_upgrade.c 00:10:42.333 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:10:42.333 Processing file lib/ftl/upgrade/ftl_chunk_upgrade.c 00:10:42.333 Processing file lib/ftl/upgrade/ftl_trim_upgrade.c 00:10:42.333 Processing file lib/ftl/upgrade/ftl_band_upgrade.c 00:10:42.333 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:10:42.333 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:10:42.333 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:10:42.333 Processing file lib/ftl/utils/ftl_addr_utils.h 00:10:42.333 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:10:42.333 Processing file lib/ftl/utils/ftl_bitmap.c 00:10:42.333 Processing file lib/ftl/utils/ftl_mempool.c 00:10:42.333 Processing file lib/ftl/utils/ftl_property.h 00:10:42.333 Processing file lib/ftl/utils/ftl_md.c 00:10:42.333 Processing file lib/ftl/utils/ftl_df.h 00:10:42.333 Processing file lib/ftl/utils/ftl_property.c 00:10:42.333 Processing file lib/ftl/utils/ftl_conf.c 00:10:42.333 Processing file lib/fuse_dispatcher/fuse_dispatcher.c 00:10:42.592 Processing file lib/idxd/idxd_user.c 00:10:42.592 Processing file lib/idxd/idxd_internal.h 00:10:42.592 Processing file lib/idxd/idxd_kernel.c 00:10:42.592 Processing file lib/idxd/idxd.c 00:10:42.592 Processing file lib/init/subsystem_rpc.c 00:10:42.593 Processing file lib/init/json_config.c 00:10:42.593 Processing file lib/init/subsystem.c 00:10:42.593 Processing file lib/init/rpc.c 00:10:42.593 Processing file lib/ioat/ioat.c 00:10:42.593 Processing file lib/ioat/ioat_internal.h 00:10:43.159 Processing file lib/iscsi/init_grp.c 00:10:43.159 Processing file lib/iscsi/param.c 00:10:43.159 Processing file lib/iscsi/tgt_node.c 00:10:43.159 
Processing file lib/iscsi/iscsi.h 00:10:43.159 Processing file lib/iscsi/task.h 00:10:43.159 Processing file lib/iscsi/conn.c 00:10:43.159 Processing file lib/iscsi/iscsi.c 00:10:43.159 Processing file lib/iscsi/portal_grp.c 00:10:43.159 Processing file lib/iscsi/iscsi_subsystem.c 00:10:43.159 Processing file lib/iscsi/iscsi_rpc.c 00:10:43.159 Processing file lib/iscsi/task.c 00:10:43.159 Processing file lib/json/json_write.c 00:10:43.159 Processing file lib/json/json_parse.c 00:10:43.159 Processing file lib/json/json_util.c 00:10:43.159 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:10:43.159 Processing file lib/jsonrpc/jsonrpc_client.c 00:10:43.159 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:10:43.159 Processing file lib/jsonrpc/jsonrpc_server.c 00:10:43.159 Processing file lib/keyring/keyring_rpc.c 00:10:43.159 Processing file lib/keyring/keyring.c 00:10:43.159 Processing file lib/log/log.c 00:10:43.159 Processing file lib/log/log_deprecated.c 00:10:43.159 Processing file lib/log/log_flags.c 00:10:43.418 Processing file lib/lvol/lvol.c 00:10:43.418 Processing file lib/nbd/nbd.c 00:10:43.418 Processing file lib/nbd/nbd_rpc.c 00:10:43.418 Processing file lib/notify/notify_rpc.c 00:10:43.418 Processing file lib/notify/notify.c 00:10:44.353 Processing file lib/nvme/nvme_poll_group.c 00:10:44.353 Processing file lib/nvme/nvme_ctrlr.c 00:10:44.353 Processing file lib/nvme/nvme_qpair.c 00:10:44.353 Processing file lib/nvme/nvme_auth.c 00:10:44.353 Processing file lib/nvme/nvme_pcie.c 00:10:44.353 Processing file lib/nvme/nvme_pcie_common.c 00:10:44.353 Processing file lib/nvme/nvme_io_msg.c 00:10:44.353 Processing file lib/nvme/nvme_zns.c 00:10:44.353 Processing file lib/nvme/nvme.c 00:10:44.353 Processing file lib/nvme/nvme_rdma.c 00:10:44.353 Processing file lib/nvme/nvme_pcie_internal.h 00:10:44.353 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:10:44.353 Processing file lib/nvme/nvme_ns.c 00:10:44.353 Processing file lib/nvme/nvme_opal.c 00:10:44.353 Processing file lib/nvme/nvme_discovery.c 00:10:44.353 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:10:44.353 Processing file lib/nvme/nvme_ns_cmd.c 00:10:44.353 Processing file lib/nvme/nvme_internal.h 00:10:44.353 Processing file lib/nvme/nvme_cuse.c 00:10:44.353 Processing file lib/nvme/nvme_tcp.c 00:10:44.353 Processing file lib/nvme/nvme_quirks.c 00:10:44.353 Processing file lib/nvme/nvme_fabric.c 00:10:44.353 Processing file lib/nvme/nvme_transport.c 00:10:44.353 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:10:44.610 Processing file lib/nvmf/ctrlr_discovery.c 00:10:44.610 Processing file lib/nvmf/stubs.c 00:10:44.610 Processing file lib/nvmf/nvmf_internal.h 00:10:44.610 Processing file lib/nvmf/nvmf.c 00:10:44.610 Processing file lib/nvmf/rdma.c 00:10:44.610 Processing file lib/nvmf/nvmf_rpc.c 00:10:44.610 Processing file lib/nvmf/auth.c 00:10:44.610 Processing file lib/nvmf/ctrlr.c 00:10:44.610 Processing file lib/nvmf/transport.c 00:10:44.610 Processing file lib/nvmf/ctrlr_bdev.c 00:10:44.610 Processing file lib/nvmf/tcp.c 00:10:44.610 Processing file lib/nvmf/subsystem.c 00:10:44.610 Processing file lib/rdma_provider/rdma_provider_verbs.c 00:10:44.610 Processing file lib/rdma_provider/common.c 00:10:44.610 Processing file lib/rdma_utils/rdma_utils.c 00:10:44.610 Processing file lib/rpc/rpc.c 00:10:44.867 Processing file lib/scsi/task.c 00:10:44.867 Processing file lib/scsi/scsi_pr.c 00:10:44.867 Processing file lib/scsi/scsi_bdev.c 00:10:44.867 Processing file lib/scsi/dev.c 00:10:44.867 Processing file 
lib/scsi/port.c 00:10:44.867 Processing file lib/scsi/scsi.c 00:10:44.867 Processing file lib/scsi/lun.c 00:10:44.867 Processing file lib/scsi/scsi_rpc.c 00:10:44.867 Processing file lib/sock/sock.c 00:10:44.867 Processing file lib/sock/sock_rpc.c 00:10:45.130 Processing file lib/thread/iobuf.c 00:10:45.130 Processing file lib/thread/thread.c 00:10:45.130 Processing file lib/trace/trace_flags.c 00:10:45.130 Processing file lib/trace/trace.c 00:10:45.130 Processing file lib/trace/trace_rpc.c 00:10:45.130 Processing file lib/trace_parser/trace.cpp 00:10:45.130 Processing file lib/ublk/ublk_rpc.c 00:10:45.130 Processing file lib/ublk/ublk.c 00:10:45.130 Processing file lib/ut/ut.c 00:10:45.130 Processing file lib/ut_mock/mock.c 00:10:45.387 Processing file lib/util/zipf.c 00:10:45.387 Processing file lib/util/base64.c 00:10:45.387 Processing file lib/util/crc16.c 00:10:45.387 Processing file lib/util/net.c 00:10:45.387 Processing file lib/util/xor.c 00:10:45.387 Processing file lib/util/fd.c 00:10:45.387 Processing file lib/util/crc64.c 00:10:45.387 Processing file lib/util/crc32_ieee.c 00:10:45.387 Processing file lib/util/crc32.c 00:10:45.387 Processing file lib/util/strerror_tls.c 00:10:45.387 Processing file lib/util/md5.c 00:10:45.387 Processing file lib/util/bit_array.c 00:10:45.387 Processing file lib/util/pipe.c 00:10:45.387 Processing file lib/util/file.c 00:10:45.387 Processing file lib/util/crc32c.c 00:10:45.387 Processing file lib/util/string.c 00:10:45.387 Processing file lib/util/fd_group.c 00:10:45.387 Processing file lib/util/hexlify.c 00:10:45.387 Processing file lib/util/math.c 00:10:45.387 Processing file lib/util/cpuset.c 00:10:45.387 Processing file lib/util/iov.c 00:10:45.387 Processing file lib/util/uuid.c 00:10:45.387 Processing file lib/util/dif.c 00:10:45.644 Processing file lib/vfio_user/host/vfio_user.c 00:10:45.645 Processing file lib/vfio_user/host/vfio_user_pci.c 00:10:45.645 Processing file lib/vhost/vhost_internal.h 00:10:45.645 Processing file lib/vhost/vhost_rpc.c 00:10:45.645 Processing file lib/vhost/rte_vhost_user.c 00:10:45.645 Processing file lib/vhost/vhost.c 00:10:45.645 Processing file lib/vhost/vhost_blk.c 00:10:45.645 Processing file lib/vhost/vhost_scsi.c 00:10:45.903 Processing file lib/virtio/virtio_vfio_user.c 00:10:45.903 Processing file lib/virtio/virtio_pci.c 00:10:45.903 Processing file lib/virtio/virtio_vhost_user.c 00:10:45.903 Processing file lib/virtio/virtio.c 00:10:45.903 Processing file lib/vmd/led.c 00:10:45.903 Processing file lib/vmd/vmd.c 00:10:45.903 Processing file module/accel/dsa/accel_dsa_rpc.c 00:10:45.903 Processing file module/accel/dsa/accel_dsa.c 00:10:45.903 Processing file module/accel/error/accel_error_rpc.c 00:10:45.903 Processing file module/accel/error/accel_error.c 00:10:46.162 Processing file module/accel/iaa/accel_iaa_rpc.c 00:10:46.162 Processing file module/accel/iaa/accel_iaa.c 00:10:46.162 Processing file module/accel/ioat/accel_ioat.c 00:10:46.162 Processing file module/accel/ioat/accel_ioat_rpc.c 00:10:46.162 Processing file module/bdev/aio/bdev_aio_rpc.c 00:10:46.162 Processing file module/bdev/aio/bdev_aio.c 00:10:46.162 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:10:46.162 Processing file module/bdev/delay/vbdev_delay.c 00:10:46.420 Processing file module/bdev/error/vbdev_error_rpc.c 00:10:46.420 Processing file module/bdev/error/vbdev_error.c 00:10:46.420 Processing file module/bdev/ftl/bdev_ftl.c 00:10:46.420 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:10:46.420 Processing file 
module/bdev/gpt/vbdev_gpt.c 00:10:46.420 Processing file module/bdev/gpt/gpt.c 00:10:46.420 Processing file module/bdev/gpt/gpt.h 00:10:46.420 Processing file module/bdev/iscsi/bdev_iscsi.c 00:10:46.420 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:10:46.420 Processing file module/bdev/lvol/vbdev_lvol.c 00:10:46.420 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:10:46.678 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:10:46.678 Processing file module/bdev/malloc/bdev_malloc.c 00:10:46.678 Processing file module/bdev/null/bdev_null_rpc.c 00:10:46.678 Processing file module/bdev/null/bdev_null.c 00:10:46.936 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:10:46.936 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:10:46.936 Processing file module/bdev/nvme/vbdev_opal.c 00:10:46.936 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:10:46.936 Processing file module/bdev/nvme/bdev_mdns_client.c 00:10:46.936 Processing file module/bdev/nvme/nvme_rpc.c 00:10:46.936 Processing file module/bdev/nvme/bdev_nvme.c 00:10:46.936 Processing file module/bdev/passthru/vbdev_passthru.c 00:10:46.936 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:10:47.194 Processing file module/bdev/raid/raid1.c 00:10:47.194 Processing file module/bdev/raid/bdev_raid.h 00:10:47.194 Processing file module/bdev/raid/raid0.c 00:10:47.194 Processing file module/bdev/raid/bdev_raid_rpc.c 00:10:47.194 Processing file module/bdev/raid/bdev_raid.c 00:10:47.194 Processing file module/bdev/raid/concat.c 00:10:47.194 Processing file module/bdev/raid/bdev_raid_sb.c 00:10:47.194 Processing file module/bdev/split/vbdev_split.c 00:10:47.194 Processing file module/bdev/split/vbdev_split_rpc.c 00:10:47.194 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:10:47.194 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:10:47.194 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:10:47.454 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:10:47.454 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:10:47.454 Processing file module/blob/bdev/blob_bdev.c 00:10:47.454 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:10:47.454 Processing file module/blobfs/bdev/blobfs_bdev.c 00:10:47.454 Processing file module/env_dpdk/env_dpdk_rpc.c 00:10:47.713 Processing file module/event/subsystems/accel/accel.c 00:10:47.713 Processing file module/event/subsystems/bdev/bdev.c 00:10:47.713 Processing file module/event/subsystems/fsdev/fsdev.c 00:10:47.713 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:10:47.713 Processing file module/event/subsystems/iobuf/iobuf.c 00:10:47.713 Processing file module/event/subsystems/iscsi/iscsi.c 00:10:47.713 Processing file module/event/subsystems/keyring/keyring.c 00:10:47.973 Processing file module/event/subsystems/nbd/nbd.c 00:10:47.973 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:10:47.973 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:10:47.973 Processing file module/event/subsystems/scheduler/scheduler.c 00:10:47.973 Processing file module/event/subsystems/scsi/scsi.c 00:10:47.973 Processing file module/event/subsystems/sock/sock.c 00:10:48.232 Processing file module/event/subsystems/ublk/ublk.c 00:10:48.232 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:10:48.232 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:10:48.232 Processing file module/event/subsystems/vmd/vmd.c 00:10:48.232 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:10:48.232 
Processing file module/fsdev/aio/linux_aio_mgr.c 00:10:48.232 Processing file module/fsdev/aio/fsdev_aio.c 00:10:48.232 Processing file module/fsdev/aio/fsdev_aio_rpc.c 00:10:48.509 Processing file module/keyring/file/keyring_rpc.c 00:10:48.509 Processing file module/keyring/file/keyring.c 00:10:48.509 Processing file module/keyring/linux/keyring.c 00:10:48.509 Processing file module/keyring/linux/keyring_rpc.c 00:10:48.509 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:10:48.509 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:10:48.509 Processing file module/scheduler/gscheduler/gscheduler.c 00:10:48.822 Processing file module/sock/posix/posix.c 00:10:48.822 Writing directory view page. 00:10:48.822 Overall coverage rate: 00:10:48.822 lines......: 37.2% (42182 of 113363 lines) 00:10:48.822 functions..: 40.9% (3905 of 9550 functions) 00:10:48.822 Note: coverage report is here: /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:10:48.822 00:10:48.822 00:10:48.822 ===================== 00:10:48.822 All unit tests passed 00:10:48.822 ===================== 00:10:48.822 00:10:48.822 00:10:48.822 07:20:52 unittest -- unit/unittest.sh@284 -- # echo 'Note: coverage report is here: /home/vagrant/spdk_repo/spdk/../output/ut_coverage' 00:10:48.822 07:20:52 unittest -- unit/unittest.sh@287 -- # set +x 00:10:48.822 ************************************ 00:10:48.822 END TEST unittest 00:10:48.822 ************************************ 00:10:48.822 00:10:48.822 real 2m39.711s 00:10:48.822 user 2m16.678s 00:10:48.822 sys 0m16.439s 00:10:48.822 07:20:52 unittest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.822 07:20:52 unittest -- common/autotest_common.sh@10 -- # set +x 00:10:48.822 07:20:52 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:10:48.822 07:20:52 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:48.822 07:20:52 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:48.822 07:20:52 -- spdk/autotest.sh@149 -- # timing_enter lib 00:10:48.822 07:20:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:48.822 07:20:52 -- common/autotest_common.sh@10 -- # set +x 00:10:48.822 07:20:52 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:10:48.822 07:20:52 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:48.822 07:20:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:48.822 07:20:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.822 07:20:52 -- common/autotest_common.sh@10 -- # set +x 00:10:48.822 ************************************ 00:10:48.822 START TEST env 00:10:48.822 ************************************ 00:10:48.822 07:20:52 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:48.823 * Looking for test storage... 
00:10:48.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:10:48.823 07:20:52 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:48.823 07:20:52 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:48.823 07:20:52 env -- common/autotest_common.sh@1693 -- # lcov --version 00:10:49.107 07:20:52 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:49.107 07:20:52 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.107 07:20:52 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.107 07:20:52 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.107 07:20:52 env -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.107 07:20:52 env -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.107 07:20:52 env -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.107 07:20:52 env -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.107 07:20:52 env -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.107 07:20:52 env -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.107 07:20:52 env -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.107 07:20:52 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.107 07:20:52 env -- scripts/common.sh@344 -- # case "$op" in 00:10:49.107 07:20:52 env -- scripts/common.sh@345 -- # : 1 00:10:49.107 07:20:52 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.107 07:20:52 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:49.107 07:20:52 env -- scripts/common.sh@365 -- # decimal 1 00:10:49.107 07:20:52 env -- scripts/common.sh@353 -- # local d=1 00:10:49.107 07:20:52 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.107 07:20:52 env -- scripts/common.sh@355 -- # echo 1 00:10:49.107 07:20:52 env -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.107 07:20:52 env -- scripts/common.sh@366 -- # decimal 2 00:10:49.107 07:20:52 env -- scripts/common.sh@353 -- # local d=2 00:10:49.107 07:20:52 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.107 07:20:52 env -- scripts/common.sh@355 -- # echo 2 00:10:49.107 07:20:52 env -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.107 07:20:52 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.107 07:20:52 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.107 07:20:52 env -- scripts/common.sh@368 -- # return 0 00:10:49.107 07:20:52 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.107 07:20:52 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:49.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.107 --rc genhtml_branch_coverage=1 00:10:49.107 --rc genhtml_function_coverage=1 00:10:49.107 --rc genhtml_legend=1 00:10:49.107 --rc geninfo_all_blocks=1 00:10:49.107 --rc geninfo_unexecuted_blocks=1 00:10:49.107 00:10:49.107 ' 00:10:49.107 07:20:52 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:49.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.107 --rc genhtml_branch_coverage=1 00:10:49.107 --rc genhtml_function_coverage=1 00:10:49.107 --rc genhtml_legend=1 00:10:49.107 --rc geninfo_all_blocks=1 00:10:49.107 --rc geninfo_unexecuted_blocks=1 00:10:49.107 00:10:49.107 ' 00:10:49.107 07:20:52 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:49.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.107 --rc genhtml_branch_coverage=1 00:10:49.107 --rc genhtml_function_coverage=1 00:10:49.107 --rc 
genhtml_legend=1 00:10:49.107 --rc geninfo_all_blocks=1 00:10:49.107 --rc geninfo_unexecuted_blocks=1 00:10:49.107 00:10:49.107 ' 00:10:49.107 07:20:52 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:49.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.107 --rc genhtml_branch_coverage=1 00:10:49.107 --rc genhtml_function_coverage=1 00:10:49.107 --rc genhtml_legend=1 00:10:49.107 --rc geninfo_all_blocks=1 00:10:49.107 --rc geninfo_unexecuted_blocks=1 00:10:49.107 00:10:49.107 ' 00:10:49.107 07:20:52 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:49.107 07:20:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:49.107 07:20:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.107 07:20:52 env -- common/autotest_common.sh@10 -- # set +x 00:10:49.107 ************************************ 00:10:49.107 START TEST env_memory 00:10:49.107 ************************************ 00:10:49.107 07:20:52 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:49.107 00:10:49.107 00:10:49.107 CUnit - A unit testing framework for C - Version 2.1-3 00:10:49.107 http://cunit.sourceforge.net/ 00:10:49.107 00:10:49.107 00:10:49.107 Suite: memory 00:10:49.107 Test: alloc and free memory map ...[2024-11-20 07:20:52.858128] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:10:49.107 passed 00:10:49.107 Test: mem map translation ...[2024-11-20 07:20:52.911296] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:10:49.107 [2024-11-20 07:20:52.911429] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:10:49.107 [2024-11-20 07:20:52.911549] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:10:49.107 [2024-11-20 07:20:52.911617] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:10:49.107 passed 00:10:49.107 Test: mem map registration ...[2024-11-20 07:20:52.991970] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:10:49.107 [2024-11-20 07:20:52.992077] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:10:49.107 passed 00:10:49.368 Test: mem map adjacent registrations ...passed 00:10:49.368 00:10:49.368 Run Summary: Type Total Ran Passed Failed Inactive 00:10:49.368 suites 1 1 n/a 0 0 00:10:49.368 tests 4 4 4 0 0 00:10:49.368 asserts 152 152 152 0 n/a 00:10:49.368 00:10:49.368 Elapsed time = 0.288 seconds 00:10:49.368 00:10:49.368 real 0m0.334s 00:10:49.368 user 0m0.305s 00:10:49.368 sys 0m0.029s 00:10:49.368 07:20:53 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.368 07:20:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:10:49.368 ************************************ 00:10:49.368 END TEST env_memory 00:10:49.368 ************************************ 00:10:49.368 07:20:53 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:49.368 07:20:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:49.368 07:20:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.368 07:20:53 env -- common/autotest_common.sh@10 -- # set +x 00:10:49.368 ************************************ 00:10:49.368 START TEST env_vtophys 00:10:49.368 ************************************ 00:10:49.368 07:20:53 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:49.368 EAL: lib.eal log level changed from notice to debug 00:10:49.368 EAL: Detected lcore 0 as core 0 on socket 0 00:10:49.368 EAL: Detected lcore 1 as core 0 on socket 0 00:10:49.368 EAL: Detected lcore 2 as core 0 on socket 0 00:10:49.368 EAL: Detected lcore 3 as core 0 on socket 0 00:10:49.368 EAL: Detected lcore 4 as core 0 on socket 0 00:10:49.368 EAL: Detected lcore 5 as core 0 on socket 0 00:10:49.368 EAL: Detected lcore 6 as core 0 on socket 0 00:10:49.368 EAL: Detected lcore 7 as core 0 on socket 0 00:10:49.368 EAL: Detected lcore 8 as core 0 on socket 0 00:10:49.368 EAL: Detected lcore 9 as core 0 on socket 0 00:10:49.368 EAL: Maximum logical cores by configuration: 128 00:10:49.368 EAL: Detected CPU lcores: 10 00:10:49.368 EAL: Detected NUMA nodes: 1 00:10:49.368 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:10:49.368 EAL: Checking presence of .so 'librte_eal.so.24' 00:10:49.368 EAL: Checking presence of .so 'librte_eal.so' 00:10:49.368 EAL: Detected static linkage of DPDK 00:10:49.368 EAL: No shared files mode enabled, IPC will be disabled 00:10:49.368 EAL: Selected IOVA mode 'PA' 00:10:49.368 EAL: Probing VFIO support... 00:10:49.368 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:49.368 EAL: VFIO modules not loaded, skipping VFIO support... 00:10:49.368 EAL: Ask a virtual area of 0x2e000 bytes 00:10:49.368 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:49.368 EAL: Setting up physically contiguous memory... 
00:10:49.368 EAL: Setting maximum number of open files to 1048576 00:10:49.368 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:49.368 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:49.368 EAL: Ask a virtual area of 0x61000 bytes 00:10:49.368 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:49.368 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:49.368 EAL: Ask a virtual area of 0x400000000 bytes 00:10:49.368 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:49.368 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:49.368 EAL: Ask a virtual area of 0x61000 bytes 00:10:49.368 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:49.368 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:49.368 EAL: Ask a virtual area of 0x400000000 bytes 00:10:49.368 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:49.368 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:49.368 EAL: Ask a virtual area of 0x61000 bytes 00:10:49.368 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:49.368 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:49.368 EAL: Ask a virtual area of 0x400000000 bytes 00:10:49.368 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:49.368 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:49.368 EAL: Ask a virtual area of 0x61000 bytes 00:10:49.368 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:49.368 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:49.368 EAL: Ask a virtual area of 0x400000000 bytes 00:10:49.368 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:49.368 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:10:49.368 EAL: Hugepages will be freed exactly as allocated. 00:10:49.368 EAL: No shared files mode enabled, IPC is disabled 00:10:49.368 EAL: No shared files mode enabled, IPC is disabled 00:10:49.627 EAL: TSC frequency is ~2290000 KHz 00:10:49.627 EAL: Main lcore 0 is ready (tid=7a66ba0a1a80;cpuset=[0]) 00:10:49.627 EAL: Trying to obtain current memory policy. 00:10:49.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:49.627 EAL: Restoring previous memory policy: 0 00:10:49.627 EAL: request: mp_malloc_sync 00:10:49.627 EAL: No shared files mode enabled, IPC is disabled 00:10:49.627 EAL: Heap on socket 0 was expanded by 2MB 00:10:49.627 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:49.627 EAL: Mem event callback 'spdk:(nil)' registered 00:10:49.627 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:10:49.627 00:10:49.627 00:10:49.627 CUnit - A unit testing framework for C - Version 2.1-3 00:10:49.627 http://cunit.sourceforge.net/ 00:10:49.627 00:10:49.627 00:10:49.627 Suite: components_suite 00:10:49.627 Test: vtophys_malloc_test ...passed 00:10:49.627 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
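The "Setting policy" / "expanded by" / "shrunk by" messages that follow are DPDK's hugepage-backed heap growing and shrinking as the vtophys tests allocate and free progressively larger pinned buffers. As orientation only, here is a minimal sketch of the kind of allocation and address translation these tests exercise, written against SPDK's public env API (spdk_env_init, spdk_dma_malloc, spdk_vtophys); the application name is hypothetical and this is not the test's actual code.

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        uint64_t size = 1024 * 1024;
        uint64_t paddr;
        void *buf;

        /* Bring up the DPDK-backed environment (hugepages, memory maps, PCI). */
        spdk_env_opts_init(&opts);
        opts.name = "vtophys_sketch";  /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            fprintf(stderr, "spdk_env_init failed\n");
            return 1;
        }

        /* A 1 MiB DMA-safe buffer; allocations like this are what trigger the
         * "Heap on socket 0 was expanded by ..." messages in the log. */
        buf = spdk_dma_malloc(size, 0x1000, NULL);
        if (buf == NULL) {
            fprintf(stderr, "spdk_dma_malloc failed\n");
            return 1;
        }

        /* Translate the virtual address to a physical address. */
        paddr = spdk_vtophys(buf, &size);
        if (paddr == SPDK_VTOPHYS_ERROR) {
            fprintf(stderr, "spdk_vtophys failed\n");
        } else {
            printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);
        }

        spdk_dma_free(buf);
        return 0;
    }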
00:10:49.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:49.627 EAL: Restoring previous memory policy: 4 00:10:49.627 EAL: Calling mem event callback 'spdk:(nil)' 00:10:49.627 EAL: request: mp_malloc_sync 00:10:49.627 EAL: No shared files mode enabled, IPC is disabled 00:10:49.627 EAL: Heap on socket 0 was expanded by 4MB 00:10:49.627 EAL: Calling mem event callback 'spdk:(nil)' 00:10:49.627 EAL: request: mp_malloc_sync 00:10:49.627 EAL: No shared files mode enabled, IPC is disabled 00:10:49.627 EAL: Heap on socket 0 was shrunk by 4MB 00:10:49.627 EAL: Trying to obtain current memory policy. 00:10:49.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:49.627 EAL: Restoring previous memory policy: 4 00:10:49.627 EAL: Calling mem event callback 'spdk:(nil)' 00:10:49.627 EAL: request: mp_malloc_sync 00:10:49.627 EAL: No shared files mode enabled, IPC is disabled 00:10:49.627 EAL: Heap on socket 0 was expanded by 6MB 00:10:49.627 EAL: Calling mem event callback 'spdk:(nil)' 00:10:49.627 EAL: request: mp_malloc_sync 00:10:49.627 EAL: No shared files mode enabled, IPC is disabled 00:10:49.627 EAL: Heap on socket 0 was shrunk by 6MB 00:10:49.627 EAL: Trying to obtain current memory policy. 00:10:49.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:49.627 EAL: Restoring previous memory policy: 4 00:10:49.627 EAL: Calling mem event callback 'spdk:(nil)' 00:10:49.627 EAL: request: mp_malloc_sync 00:10:49.627 EAL: No shared files mode enabled, IPC is disabled 00:10:49.627 EAL: Heap on socket 0 was expanded by 10MB 00:10:49.627 EAL: Calling mem event callback 'spdk:(nil)' 00:10:49.627 EAL: request: mp_malloc_sync 00:10:49.627 EAL: No shared files mode enabled, IPC is disabled 00:10:49.627 EAL: Heap on socket 0 was shrunk by 10MB 00:10:49.886 EAL: Trying to obtain current memory policy. 00:10:49.886 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:49.886 EAL: Restoring previous memory policy: 4 00:10:49.886 EAL: Calling mem event callback 'spdk:(nil)' 00:10:49.886 EAL: request: mp_malloc_sync 00:10:49.886 EAL: No shared files mode enabled, IPC is disabled 00:10:49.886 EAL: Heap on socket 0 was expanded by 18MB 00:10:49.886 EAL: Calling mem event callback 'spdk:(nil)' 00:10:49.886 EAL: request: mp_malloc_sync 00:10:49.886 EAL: No shared files mode enabled, IPC is disabled 00:10:49.886 EAL: Heap on socket 0 was shrunk by 18MB 00:10:49.886 EAL: Trying to obtain current memory policy. 00:10:49.886 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:49.886 EAL: Restoring previous memory policy: 4 00:10:49.886 EAL: Calling mem event callback 'spdk:(nil)' 00:10:49.886 EAL: request: mp_malloc_sync 00:10:49.886 EAL: No shared files mode enabled, IPC is disabled 00:10:49.886 EAL: Heap on socket 0 was expanded by 34MB 00:10:49.886 EAL: Calling mem event callback 'spdk:(nil)' 00:10:49.886 EAL: request: mp_malloc_sync 00:10:49.886 EAL: No shared files mode enabled, IPC is disabled 00:10:49.886 EAL: Heap on socket 0 was shrunk by 34MB 00:10:49.886 EAL: Trying to obtain current memory policy. 
00:10:49.886 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:50.146 EAL: Restoring previous memory policy: 4 00:10:50.146 EAL: Calling mem event callback 'spdk:(nil)' 00:10:50.146 EAL: request: mp_malloc_sync 00:10:50.146 EAL: No shared files mode enabled, IPC is disabled 00:10:50.146 EAL: Heap on socket 0 was expanded by 66MB 00:10:50.146 EAL: Calling mem event callback 'spdk:(nil)' 00:10:50.146 EAL: request: mp_malloc_sync 00:10:50.146 EAL: No shared files mode enabled, IPC is disabled 00:10:50.146 EAL: Heap on socket 0 was shrunk by 66MB 00:10:50.454 EAL: Trying to obtain current memory policy. 00:10:50.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:50.454 EAL: Restoring previous memory policy: 4 00:10:50.454 EAL: Calling mem event callback 'spdk:(nil)' 00:10:50.454 EAL: request: mp_malloc_sync 00:10:50.454 EAL: No shared files mode enabled, IPC is disabled 00:10:50.454 EAL: Heap on socket 0 was expanded by 130MB 00:10:50.713 EAL: Calling mem event callback 'spdk:(nil)' 00:10:50.713 EAL: request: mp_malloc_sync 00:10:50.713 EAL: No shared files mode enabled, IPC is disabled 00:10:50.713 EAL: Heap on socket 0 was shrunk by 130MB 00:10:50.972 EAL: Trying to obtain current memory policy. 00:10:50.972 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:50.972 EAL: Restoring previous memory policy: 4 00:10:50.972 EAL: Calling mem event callback 'spdk:(nil)' 00:10:50.972 EAL: request: mp_malloc_sync 00:10:50.972 EAL: No shared files mode enabled, IPC is disabled 00:10:50.972 EAL: Heap on socket 0 was expanded by 258MB 00:10:51.911 EAL: Calling mem event callback 'spdk:(nil)' 00:10:51.911 EAL: request: mp_malloc_sync 00:10:51.911 EAL: No shared files mode enabled, IPC is disabled 00:10:51.911 EAL: Heap on socket 0 was shrunk by 258MB 00:10:52.484 EAL: Trying to obtain current memory policy. 00:10:52.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:52.484 EAL: Restoring previous memory policy: 4 00:10:52.484 EAL: Calling mem event callback 'spdk:(nil)' 00:10:52.484 EAL: request: mp_malloc_sync 00:10:52.484 EAL: No shared files mode enabled, IPC is disabled 00:10:52.484 EAL: Heap on socket 0 was expanded by 514MB 00:10:53.871 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.871 EAL: request: mp_malloc_sync 00:10:53.871 EAL: No shared files mode enabled, IPC is disabled 00:10:53.871 EAL: Heap on socket 0 was shrunk by 514MB 00:10:55.252 EAL: Trying to obtain current memory policy. 
00:10:55.252 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:55.252 EAL: Restoring previous memory policy: 4 00:10:55.252 EAL: Calling mem event callback 'spdk:(nil)' 00:10:55.252 EAL: request: mp_malloc_sync 00:10:55.252 EAL: No shared files mode enabled, IPC is disabled 00:10:55.252 EAL: Heap on socket 0 was expanded by 1026MB 00:10:58.537 EAL: Calling mem event callback 'spdk:(nil)' 00:10:58.537 EAL: request: mp_malloc_sync 00:10:58.537 EAL: No shared files mode enabled, IPC is disabled 00:10:58.537 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:59.985 passed 00:10:59.985 00:10:59.985 Run Summary: Type Total Ran Passed Failed Inactive 00:10:59.985 suites 1 1 n/a 0 0 00:10:59.985 tests 2 2 2 0 0 00:10:59.985 asserts 5446 5446 5446 0 n/a 00:10:59.985 00:10:59.985 Elapsed time = 10.276 seconds 00:10:59.985 EAL: Calling mem event callback 'spdk:(nil)' 00:10:59.985 EAL: request: mp_malloc_sync 00:10:59.985 EAL: No shared files mode enabled, IPC is disabled 00:10:59.985 EAL: Heap on socket 0 was shrunk by 2MB 00:10:59.985 EAL: No shared files mode enabled, IPC is disabled 00:10:59.985 EAL: No shared files mode enabled, IPC is disabled 00:10:59.985 EAL: No shared files mode enabled, IPC is disabled 00:10:59.985 00:10:59.985 real 0m10.583s 00:10:59.985 user 0m9.537s 00:10:59.985 sys 0m0.914s 00:10:59.985 07:21:03 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.985 ************************************ 00:10:59.985 END TEST env_vtophys 00:10:59.985 ************************************ 00:10:59.985 07:21:03 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:10:59.985 07:21:03 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:59.985 07:21:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:59.985 07:21:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.985 07:21:03 env -- common/autotest_common.sh@10 -- # set +x 00:10:59.985 ************************************ 00:10:59.985 START TEST env_pci 00:10:59.985 ************************************ 00:10:59.985 07:21:03 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:59.985 00:10:59.985 00:10:59.985 CUnit - A unit testing framework for C - Version 2.1-3 00:10:59.985 http://cunit.sourceforge.net/ 00:10:59.985 00:10:59.985 00:10:59.985 Suite: pci 00:10:59.985 Test: pci_hook ...[2024-11-20 07:21:03.869511] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 66404 has claimed it 00:11:00.245 passed 00:11:00.245 00:11:00.245 Run Summary: Type Total Ran Passed Failed Inactive 00:11:00.245 suites 1 1 n/a 0 0 00:11:00.245 tests 1 1 1 0 0 00:11:00.245 asserts 25 25 25 0 n/a 00:11:00.245 00:11:00.245 Elapsed time = 0.011 seconds 00:11:00.245 EAL: Cannot find device (10000:00:01.0) 00:11:00.245 EAL: Failed to attach device on primary process 00:11:00.245 00:11:00.245 real 0m0.108s 00:11:00.245 user 0m0.047s 00:11:00.245 sys 0m0.061s 00:11:00.245 07:21:03 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.245 07:21:03 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:11:00.245 ************************************ 00:11:00.245 END TEST env_pci 00:11:00.245 ************************************ 00:11:00.245 07:21:03 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:11:00.245 07:21:03 env -- env/env.sh@15 -- # uname 00:11:00.245 07:21:03 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:11:00.245 07:21:03 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:11:00.245 07:21:03 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:00.245 07:21:03 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:00.245 07:21:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.245 07:21:04 env -- common/autotest_common.sh@10 -- # set +x 00:11:00.245 ************************************ 00:11:00.245 START TEST env_dpdk_post_init 00:11:00.245 ************************************ 00:11:00.245 07:21:04 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:00.245 EAL: Detected CPU lcores: 10 00:11:00.245 EAL: Detected NUMA nodes: 1 00:11:00.245 EAL: Detected static linkage of DPDK 00:11:00.245 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:00.245 EAL: Selected IOVA mode 'PA' 00:11:00.504 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:00.504 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:11:00.504 Starting DPDK initialization... 00:11:00.504 Starting SPDK post initialization... 00:11:00.504 SPDK NVMe probe 00:11:00.504 Attaching to 0000:00:10.0 00:11:00.504 Attached to 0000:00:10.0 00:11:00.504 Cleaning up... 00:11:00.504 00:11:00.504 real 0m0.289s 00:11:00.504 user 0m0.093s 00:11:00.504 sys 0m0.097s 00:11:00.504 07:21:04 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.504 07:21:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:11:00.504 ************************************ 00:11:00.504 END TEST env_dpdk_post_init 00:11:00.504 ************************************ 00:11:00.504 07:21:04 env -- env/env.sh@26 -- # uname 00:11:00.504 07:21:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:11:00.504 07:21:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:00.504 07:21:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:00.504 07:21:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.504 07:21:04 env -- common/autotest_common.sh@10 -- # set +x 00:11:00.504 ************************************ 00:11:00.504 START TEST env_mem_callbacks 00:11:00.504 ************************************ 00:11:00.504 07:21:04 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:00.504 EAL: Detected CPU lcores: 10 00:11:00.504 EAL: Detected NUMA nodes: 1 00:11:00.504 EAL: Detected static linkage of DPDK 00:11:00.763 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:00.763 EAL: Selected IOVA mode 'PA' 00:11:00.763 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:00.763 00:11:00.763 00:11:00.763 CUnit - A unit testing framework for C - Version 2.1-3 00:11:00.763 http://cunit.sourceforge.net/ 00:11:00.763 00:11:00.763 00:11:00.763 Suite: memory 00:11:00.763 Test: test ... 
00:11:00.763 register 0x200000200000 2097152 00:11:00.763 malloc 3145728 00:11:00.763 register 0x200000400000 4194304 00:11:00.763 buf 0x2000004fffc0 len 3145728 PASSED 00:11:00.763 malloc 64 00:11:00.763 buf 0x2000004ffec0 len 64 PASSED 00:11:00.763 malloc 4194304 00:11:00.763 register 0x200000800000 6291456 00:11:00.763 buf 0x2000009fffc0 len 4194304 PASSED 00:11:00.763 free 0x2000004fffc0 3145728 00:11:00.763 free 0x2000004ffec0 64 00:11:00.763 unregister 0x200000400000 4194304 PASSED 00:11:00.763 free 0x2000009fffc0 4194304 00:11:00.763 unregister 0x200000800000 6291456 PASSED 00:11:00.763 malloc 8388608 00:11:00.763 register 0x200000400000 10485760 00:11:00.763 buf 0x2000005fffc0 len 8388608 PASSED 00:11:00.763 free 0x2000005fffc0 8388608 00:11:00.763 unregister 0x200000400000 10485760 PASSED 00:11:00.763 passed 00:11:00.763 00:11:00.763 Run Summary: Type Total Ran Passed Failed Inactive 00:11:00.763 suites 1 1 n/a 0 0 00:11:00.763 tests 1 1 1 0 0 00:11:00.763 asserts 15 15 15 0 n/a 00:11:00.763 00:11:00.763 Elapsed time = 0.097 seconds 00:11:00.763 00:11:00.763 real 0m0.303s 00:11:00.763 user 0m0.125s 00:11:00.763 sys 0m0.078s 00:11:00.763 07:21:04 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.763 07:21:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:11:00.763 ************************************ 00:11:00.763 END TEST env_mem_callbacks 00:11:00.763 ************************************ 00:11:01.022 00:11:01.022 real 0m12.135s 00:11:01.022 user 0m10.306s 00:11:01.022 sys 0m1.528s 00:11:01.022 07:21:04 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.022 07:21:04 env -- common/autotest_common.sh@10 -- # set +x 00:11:01.022 ************************************ 00:11:01.022 END TEST env 00:11:01.022 ************************************ 00:11:01.022 07:21:04 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:01.022 07:21:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:01.022 07:21:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.022 07:21:04 -- common/autotest_common.sh@10 -- # set +x 00:11:01.022 ************************************ 00:11:01.022 START TEST rpc 00:11:01.022 ************************************ 00:11:01.022 07:21:04 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:01.022 * Looking for test storage... 
00:11:01.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:01.022 07:21:04 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:01.022 07:21:04 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:01.022 07:21:04 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:01.281 07:21:04 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:01.281 07:21:04 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.281 07:21:04 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.281 07:21:04 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.281 07:21:04 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.281 07:21:04 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.281 07:21:04 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.281 07:21:04 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.281 07:21:04 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.281 07:21:04 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.281 07:21:04 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.281 07:21:04 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.281 07:21:04 rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:01.281 07:21:04 rpc -- scripts/common.sh@345 -- # : 1 00:11:01.281 07:21:04 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.281 07:21:04 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:01.281 07:21:04 rpc -- scripts/common.sh@365 -- # decimal 1 00:11:01.281 07:21:04 rpc -- scripts/common.sh@353 -- # local d=1 00:11:01.281 07:21:04 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.281 07:21:04 rpc -- scripts/common.sh@355 -- # echo 1 00:11:01.281 07:21:04 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.281 07:21:04 rpc -- scripts/common.sh@366 -- # decimal 2 00:11:01.281 07:21:04 rpc -- scripts/common.sh@353 -- # local d=2 00:11:01.281 07:21:04 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.281 07:21:04 rpc -- scripts/common.sh@355 -- # echo 2 00:11:01.281 07:21:04 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.281 07:21:04 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.281 07:21:04 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.281 07:21:04 rpc -- scripts/common.sh@368 -- # return 0 00:11:01.281 07:21:04 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.281 07:21:04 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:01.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.281 --rc genhtml_branch_coverage=1 00:11:01.281 --rc genhtml_function_coverage=1 00:11:01.281 --rc genhtml_legend=1 00:11:01.281 --rc geninfo_all_blocks=1 00:11:01.281 --rc geninfo_unexecuted_blocks=1 00:11:01.281 00:11:01.281 ' 00:11:01.281 07:21:04 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:01.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.281 --rc genhtml_branch_coverage=1 00:11:01.281 --rc genhtml_function_coverage=1 00:11:01.281 --rc genhtml_legend=1 00:11:01.281 --rc geninfo_all_blocks=1 00:11:01.281 --rc geninfo_unexecuted_blocks=1 00:11:01.282 00:11:01.282 ' 00:11:01.282 07:21:04 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:01.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.282 --rc genhtml_branch_coverage=1 00:11:01.282 --rc genhtml_function_coverage=1 00:11:01.282 --rc 
genhtml_legend=1 00:11:01.282 --rc geninfo_all_blocks=1 00:11:01.282 --rc geninfo_unexecuted_blocks=1 00:11:01.282 00:11:01.282 ' 00:11:01.282 07:21:04 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:01.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.282 --rc genhtml_branch_coverage=1 00:11:01.282 --rc genhtml_function_coverage=1 00:11:01.282 --rc genhtml_legend=1 00:11:01.282 --rc geninfo_all_blocks=1 00:11:01.282 --rc geninfo_unexecuted_blocks=1 00:11:01.282 00:11:01.282 ' 00:11:01.282 07:21:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=66531 00:11:01.282 07:21:04 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:11:01.282 07:21:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:01.282 07:21:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 66531 00:11:01.282 07:21:04 rpc -- common/autotest_common.sh@835 -- # '[' -z 66531 ']' 00:11:01.282 07:21:04 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.282 07:21:04 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.282 07:21:04 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.282 07:21:04 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.282 07:21:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.282 [2024-11-20 07:21:05.063474] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:11:01.282 [2024-11-20 07:21:05.063596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66531 ] 00:11:01.540 [2024-11-20 07:21:05.241605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.540 [2024-11-20 07:21:05.377675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:11:01.540 [2024-11-20 07:21:05.377760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 66531' to capture a snapshot of events at runtime. 00:11:01.540 [2024-11-20 07:21:05.377773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.540 [2024-11-20 07:21:05.377785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.540 [2024-11-20 07:21:05.377797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid66531 for offline analysis/debug. 
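(The rpc_integrity run that follows drives the freshly started spdk_tgt over its JSON-RPC socket through the harness's rpc_cmd wrapper. A minimal sketch of the same sequence done by hand, assuming a target already listening on the default /var/tmp/spdk.sock and the scripts/rpc.py client shipped in the SPDK tree — not the harness's exact invocation, just the equivalent calls:

    # create an 8 MB malloc bdev with 512-byte blocks, wrap it in a passthru bdev, inspect, then tear down
    ./scripts/rpc.py bdev_malloc_create 8 512
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length    # expect 2: Malloc0 plus Passthru0
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
)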
00:11:01.540 [2024-11-20 07:21:05.379308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.918 07:21:06 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.918 07:21:06 rpc -- common/autotest_common.sh@868 -- # return 0 00:11:02.919 07:21:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:02.919 07:21:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:02.919 07:21:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:11:02.919 07:21:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:11:02.919 07:21:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:02.919 07:21:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.919 07:21:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.919 ************************************ 00:11:02.919 START TEST rpc_integrity 00:11:02.919 ************************************ 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:11:02.919 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.919 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:02.919 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:02.919 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:02.919 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.919 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:11:02.919 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.919 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:02.919 { 00:11:02.919 "name": "Malloc0", 00:11:02.919 "aliases": [ 00:11:02.919 "e5fd94f1-befa-4ff8-9a69-8c85573f96b6" 00:11:02.919 ], 00:11:02.919 "product_name": "Malloc disk", 00:11:02.919 "block_size": 512, 00:11:02.919 "num_blocks": 16384, 00:11:02.919 "uuid": "e5fd94f1-befa-4ff8-9a69-8c85573f96b6", 00:11:02.919 "assigned_rate_limits": { 00:11:02.919 "rw_ios_per_sec": 0, 00:11:02.919 "rw_mbytes_per_sec": 0, 00:11:02.919 "r_mbytes_per_sec": 0, 00:11:02.919 "w_mbytes_per_sec": 0 00:11:02.919 }, 00:11:02.919 "claimed": false, 00:11:02.919 "zoned": false, 00:11:02.919 "supported_io_types": { 00:11:02.919 "read": true, 00:11:02.919 "write": true, 00:11:02.919 "unmap": true, 00:11:02.919 "flush": true, 
00:11:02.919 "reset": true, 00:11:02.919 "nvme_admin": false, 00:11:02.919 "nvme_io": false, 00:11:02.919 "nvme_io_md": false, 00:11:02.919 "write_zeroes": true, 00:11:02.919 "zcopy": true, 00:11:02.919 "get_zone_info": false, 00:11:02.919 "zone_management": false, 00:11:02.919 "zone_append": false, 00:11:02.919 "compare": false, 00:11:02.919 "compare_and_write": false, 00:11:02.919 "abort": true, 00:11:02.919 "seek_hole": false, 00:11:02.919 "seek_data": false, 00:11:02.919 "copy": true, 00:11:02.919 "nvme_iov_md": false 00:11:02.919 }, 00:11:02.919 "memory_domains": [ 00:11:02.919 { 00:11:02.919 "dma_device_id": "system", 00:11:02.919 "dma_device_type": 1 00:11:02.919 }, 00:11:02.919 { 00:11:02.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.919 "dma_device_type": 2 00:11:02.919 } 00:11:02.919 ], 00:11:02.919 "driver_specific": {} 00:11:02.919 } 00:11:02.919 ]' 00:11:02.919 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:02.919 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:02.919 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:02.919 [2024-11-20 07:21:06.549853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:11:02.919 [2024-11-20 07:21:06.549938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.919 [2024-11-20 07:21:06.549964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:11:02.919 [2024-11-20 07:21:06.549979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.919 [2024-11-20 07:21:06.552529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.919 [2024-11-20 07:21:06.552583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:02.919 Passthru0 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.919 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.919 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:02.919 { 00:11:02.919 "name": "Malloc0", 00:11:02.919 "aliases": [ 00:11:02.919 "e5fd94f1-befa-4ff8-9a69-8c85573f96b6" 00:11:02.919 ], 00:11:02.919 "product_name": "Malloc disk", 00:11:02.919 "block_size": 512, 00:11:02.919 "num_blocks": 16384, 00:11:02.919 "uuid": "e5fd94f1-befa-4ff8-9a69-8c85573f96b6", 00:11:02.919 "assigned_rate_limits": { 00:11:02.919 "rw_ios_per_sec": 0, 00:11:02.919 "rw_mbytes_per_sec": 0, 00:11:02.919 "r_mbytes_per_sec": 0, 00:11:02.919 "w_mbytes_per_sec": 0 00:11:02.919 }, 00:11:02.919 "claimed": true, 00:11:02.919 "claim_type": "exclusive_write", 00:11:02.919 "zoned": false, 00:11:02.919 "supported_io_types": { 00:11:02.919 "read": true, 00:11:02.919 "write": true, 00:11:02.919 "unmap": true, 00:11:02.919 "flush": true, 00:11:02.919 "reset": true, 00:11:02.919 "nvme_admin": false, 00:11:02.919 "nvme_io": false, 00:11:02.919 "nvme_io_md": false, 00:11:02.919 "write_zeroes": true, 00:11:02.919 "zcopy": true, 
00:11:02.919 "get_zone_info": false, 00:11:02.919 "zone_management": false, 00:11:02.919 "zone_append": false, 00:11:02.919 "compare": false, 00:11:02.919 "compare_and_write": false, 00:11:02.919 "abort": true, 00:11:02.919 "seek_hole": false, 00:11:02.919 "seek_data": false, 00:11:02.919 "copy": true, 00:11:02.919 "nvme_iov_md": false 00:11:02.919 }, 00:11:02.919 "memory_domains": [ 00:11:02.919 { 00:11:02.919 "dma_device_id": "system", 00:11:02.919 "dma_device_type": 1 00:11:02.919 }, 00:11:02.919 { 00:11:02.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.919 "dma_device_type": 2 00:11:02.919 } 00:11:02.919 ], 00:11:02.919 "driver_specific": {} 00:11:02.919 }, 00:11:02.919 { 00:11:02.919 "name": "Passthru0", 00:11:02.919 "aliases": [ 00:11:02.919 "3a92e9d5-7afb-52a3-9c7a-3997769daa6f" 00:11:02.919 ], 00:11:02.919 "product_name": "passthru", 00:11:02.919 "block_size": 512, 00:11:02.919 "num_blocks": 16384, 00:11:02.919 "uuid": "3a92e9d5-7afb-52a3-9c7a-3997769daa6f", 00:11:02.919 "assigned_rate_limits": { 00:11:02.919 "rw_ios_per_sec": 0, 00:11:02.919 "rw_mbytes_per_sec": 0, 00:11:02.919 "r_mbytes_per_sec": 0, 00:11:02.919 "w_mbytes_per_sec": 0 00:11:02.919 }, 00:11:02.919 "claimed": false, 00:11:02.919 "zoned": false, 00:11:02.919 "supported_io_types": { 00:11:02.919 "read": true, 00:11:02.919 "write": true, 00:11:02.919 "unmap": true, 00:11:02.919 "flush": true, 00:11:02.919 "reset": true, 00:11:02.919 "nvme_admin": false, 00:11:02.919 "nvme_io": false, 00:11:02.919 "nvme_io_md": false, 00:11:02.919 "write_zeroes": true, 00:11:02.919 "zcopy": true, 00:11:02.919 "get_zone_info": false, 00:11:02.919 "zone_management": false, 00:11:02.919 "zone_append": false, 00:11:02.919 "compare": false, 00:11:02.919 "compare_and_write": false, 00:11:02.919 "abort": true, 00:11:02.919 "seek_hole": false, 00:11:02.919 "seek_data": false, 00:11:02.919 "copy": true, 00:11:02.919 "nvme_iov_md": false 00:11:02.919 }, 00:11:02.919 "memory_domains": [ 00:11:02.919 { 00:11:02.919 "dma_device_id": "system", 00:11:02.919 "dma_device_type": 1 00:11:02.919 }, 00:11:02.919 { 00:11:02.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.919 "dma_device_type": 2 00:11:02.919 } 00:11:02.919 ], 00:11:02.919 "driver_specific": { 00:11:02.919 "passthru": { 00:11:02.919 "name": "Passthru0", 00:11:02.919 "base_bdev_name": "Malloc0" 00:11:02.919 } 00:11:02.919 } 00:11:02.919 } 00:11:02.919 ]' 00:11:02.919 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:02.919 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:02.919 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:02.919 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.919 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:11:02.920 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.920 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:02.920 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.920 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:02.920 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.920 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:11:02.920 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.920 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:02.920 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:02.920 07:21:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:02.920 00:11:02.920 real 0m0.209s 00:11:02.920 user 0m0.058s 00:11:02.920 sys 0m0.053s 00:11:02.920 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.920 07:21:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:02.920 ************************************ 00:11:02.920 END TEST rpc_integrity 00:11:02.920 ************************************ 00:11:02.920 07:21:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:11:02.920 07:21:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:02.920 07:21:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.920 07:21:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.920 ************************************ 00:11:02.920 START TEST rpc_plugins 00:11:02.920 ************************************ 00:11:02.920 07:21:06 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:11:02.920 07:21:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:11:02.920 07:21:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.920 07:21:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:02.920 07:21:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.920 07:21:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:11:02.920 07:21:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:11:02.920 07:21:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.920 07:21:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:02.920 07:21:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.920 07:21:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:11:02.920 { 00:11:02.920 "name": "Malloc1", 00:11:02.920 "aliases": [ 00:11:02.920 "3a355b28-9b6b-44aa-85ca-0fce8434da49" 00:11:02.920 ], 00:11:02.920 "product_name": "Malloc disk", 00:11:02.920 "block_size": 4096, 00:11:02.920 "num_blocks": 256, 00:11:02.920 "uuid": "3a355b28-9b6b-44aa-85ca-0fce8434da49", 00:11:02.920 "assigned_rate_limits": { 00:11:02.920 "rw_ios_per_sec": 0, 00:11:02.920 "rw_mbytes_per_sec": 0, 00:11:02.920 "r_mbytes_per_sec": 0, 00:11:02.920 "w_mbytes_per_sec": 0 00:11:02.920 }, 00:11:02.920 "claimed": false, 00:11:02.920 "zoned": false, 00:11:02.920 "supported_io_types": { 00:11:02.920 "read": true, 00:11:02.920 "write": true, 00:11:02.920 "unmap": true, 00:11:02.920 "flush": true, 00:11:02.920 "reset": true, 00:11:02.920 "nvme_admin": false, 00:11:02.920 "nvme_io": false, 00:11:02.920 "nvme_io_md": false, 00:11:02.920 "write_zeroes": true, 00:11:02.920 "zcopy": true, 00:11:02.920 "get_zone_info": false, 00:11:02.920 "zone_management": false, 00:11:02.920 "zone_append": false, 00:11:02.920 "compare": false, 00:11:02.920 "compare_and_write": false, 00:11:02.920 "abort": true, 00:11:02.920 "seek_hole": false, 00:11:02.920 "seek_data": false, 00:11:02.920 "copy": true, 00:11:02.920 "nvme_iov_md": false 00:11:02.920 }, 00:11:02.920 "memory_domains": [ 00:11:02.920 { 00:11:02.920 "dma_device_id": "system", 00:11:02.920 "dma_device_type": 1 00:11:02.920 }, 00:11:02.920 { 00:11:02.920 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:02.920 "dma_device_type": 2 00:11:02.920 } 00:11:02.920 ], 00:11:02.920 "driver_specific": {} 00:11:02.920 } 00:11:02.920 ]' 00:11:02.920 07:21:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:11:02.920 07:21:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:11:02.920 07:21:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:11:02.920 07:21:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.920 07:21:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:02.920 07:21:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.920 07:21:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:11:02.920 07:21:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.920 07:21:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:02.920 07:21:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.920 07:21:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:11:02.920 07:21:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:11:02.920 07:21:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:11:02.920 00:11:02.920 real 0m0.085s 00:11:02.920 user 0m0.030s 00:11:02.920 sys 0m0.023s 00:11:02.920 07:21:06 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.920 07:21:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:02.920 ************************************ 00:11:02.920 END TEST rpc_plugins 00:11:02.920 ************************************ 00:11:03.179 07:21:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:11:03.179 07:21:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:03.179 07:21:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.179 07:21:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.179 ************************************ 00:11:03.179 START TEST rpc_trace_cmd_test 00:11:03.179 ************************************ 00:11:03.179 07:21:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:11:03.179 07:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:11:03.179 07:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:11:03.179 07:21:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.179 07:21:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.179 07:21:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.179 07:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:11:03.179 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid66531", 00:11:03.179 "tpoint_group_mask": "0x8", 00:11:03.179 "iscsi_conn": { 00:11:03.179 "mask": "0x2", 00:11:03.179 "tpoint_mask": "0x0" 00:11:03.179 }, 00:11:03.179 "scsi": { 00:11:03.179 "mask": "0x4", 00:11:03.179 "tpoint_mask": "0x0" 00:11:03.179 }, 00:11:03.179 "bdev": { 00:11:03.179 "mask": "0x8", 00:11:03.179 "tpoint_mask": "0xffffffffffffffff" 00:11:03.179 }, 00:11:03.179 "nvmf_rdma": { 00:11:03.179 "mask": "0x10", 00:11:03.179 "tpoint_mask": "0x0" 00:11:03.179 }, 00:11:03.179 "nvmf_tcp": { 00:11:03.179 "mask": "0x20", 00:11:03.179 "tpoint_mask": "0x0" 00:11:03.179 }, 00:11:03.179 "ftl": { 00:11:03.179 "mask": "0x40", 00:11:03.179 "tpoint_mask": "0x0" 00:11:03.179 }, 00:11:03.179 "blobfs": { 00:11:03.179 "mask": "0x80", 00:11:03.179 
"tpoint_mask": "0x0" 00:11:03.179 }, 00:11:03.179 "dsa": { 00:11:03.179 "mask": "0x200", 00:11:03.179 "tpoint_mask": "0x0" 00:11:03.179 }, 00:11:03.179 "thread": { 00:11:03.179 "mask": "0x400", 00:11:03.179 "tpoint_mask": "0x0" 00:11:03.179 }, 00:11:03.179 "nvme_pcie": { 00:11:03.179 "mask": "0x800", 00:11:03.179 "tpoint_mask": "0x0" 00:11:03.179 }, 00:11:03.179 "iaa": { 00:11:03.179 "mask": "0x1000", 00:11:03.179 "tpoint_mask": "0x0" 00:11:03.179 }, 00:11:03.179 "nvme_tcp": { 00:11:03.179 "mask": "0x2000", 00:11:03.179 "tpoint_mask": "0x0" 00:11:03.179 }, 00:11:03.179 "bdev_nvme": { 00:11:03.179 "mask": "0x4000", 00:11:03.179 "tpoint_mask": "0x0" 00:11:03.179 }, 00:11:03.179 "sock": { 00:11:03.179 "mask": "0x8000", 00:11:03.179 "tpoint_mask": "0x0" 00:11:03.179 }, 00:11:03.179 "blob": { 00:11:03.179 "mask": "0x10000", 00:11:03.179 "tpoint_mask": "0x0" 00:11:03.179 }, 00:11:03.179 "bdev_raid": { 00:11:03.179 "mask": "0x20000", 00:11:03.179 "tpoint_mask": "0x0" 00:11:03.179 }, 00:11:03.179 "scheduler": { 00:11:03.179 "mask": "0x40000", 00:11:03.179 "tpoint_mask": "0x0" 00:11:03.179 } 00:11:03.179 }' 00:11:03.179 07:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:11:03.180 07:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:11:03.180 07:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:11:03.180 07:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:11:03.180 07:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:11:03.180 07:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:11:03.180 07:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:11:03.180 07:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:11:03.180 07:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:11:03.180 07:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:11:03.180 00:11:03.180 real 0m0.072s 00:11:03.180 user 0m0.036s 00:11:03.180 sys 0m0.030s 00:11:03.180 07:21:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.180 07:21:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.180 ************************************ 00:11:03.180 END TEST rpc_trace_cmd_test 00:11:03.180 ************************************ 00:11:03.180 07:21:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:11:03.180 07:21:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:11:03.180 07:21:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:11:03.180 07:21:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:03.180 07:21:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.180 07:21:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.180 ************************************ 00:11:03.180 START TEST rpc_daemon_integrity 00:11:03.180 ************************************ 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:03.180 { 00:11:03.180 "name": "Malloc2", 00:11:03.180 "aliases": [ 00:11:03.180 "598f4dc1-b6ce-4df5-a0fc-e2c7a27b02d5" 00:11:03.180 ], 00:11:03.180 "product_name": "Malloc disk", 00:11:03.180 "block_size": 512, 00:11:03.180 "num_blocks": 16384, 00:11:03.180 "uuid": "598f4dc1-b6ce-4df5-a0fc-e2c7a27b02d5", 00:11:03.180 "assigned_rate_limits": { 00:11:03.180 "rw_ios_per_sec": 0, 00:11:03.180 "rw_mbytes_per_sec": 0, 00:11:03.180 "r_mbytes_per_sec": 0, 00:11:03.180 "w_mbytes_per_sec": 0 00:11:03.180 }, 00:11:03.180 "claimed": false, 00:11:03.180 "zoned": false, 00:11:03.180 "supported_io_types": { 00:11:03.180 "read": true, 00:11:03.180 "write": true, 00:11:03.180 "unmap": true, 00:11:03.180 "flush": true, 00:11:03.180 "reset": true, 00:11:03.180 "nvme_admin": false, 00:11:03.180 "nvme_io": false, 00:11:03.180 "nvme_io_md": false, 00:11:03.180 "write_zeroes": true, 00:11:03.180 "zcopy": true, 00:11:03.180 "get_zone_info": false, 00:11:03.180 "zone_management": false, 00:11:03.180 "zone_append": false, 00:11:03.180 "compare": false, 00:11:03.180 "compare_and_write": false, 00:11:03.180 "abort": true, 00:11:03.180 "seek_hole": false, 00:11:03.180 "seek_data": false, 00:11:03.180 "copy": true, 00:11:03.180 "nvme_iov_md": false 00:11:03.180 }, 00:11:03.180 "memory_domains": [ 00:11:03.180 { 00:11:03.180 "dma_device_id": "system", 00:11:03.180 "dma_device_type": 1 00:11:03.180 }, 00:11:03.180 { 00:11:03.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.180 "dma_device_type": 2 00:11:03.180 } 00:11:03.180 ], 00:11:03.180 "driver_specific": {} 00:11:03.180 } 00:11:03.180 ]' 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:03.180 [2024-11-20 07:21:07.093934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:11:03.180 [2024-11-20 07:21:07.094012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.180 [2024-11-20 07:21:07.094036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x516000008480 00:11:03.180 [2024-11-20 07:21:07.094050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.180 [2024-11-20 07:21:07.096563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.180 [2024-11-20 07:21:07.096614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:03.180 Passthru0 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.180 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:03.439 { 00:11:03.439 "name": "Malloc2", 00:11:03.439 "aliases": [ 00:11:03.439 "598f4dc1-b6ce-4df5-a0fc-e2c7a27b02d5" 00:11:03.439 ], 00:11:03.439 "product_name": "Malloc disk", 00:11:03.439 "block_size": 512, 00:11:03.439 "num_blocks": 16384, 00:11:03.439 "uuid": "598f4dc1-b6ce-4df5-a0fc-e2c7a27b02d5", 00:11:03.439 "assigned_rate_limits": { 00:11:03.439 "rw_ios_per_sec": 0, 00:11:03.439 "rw_mbytes_per_sec": 0, 00:11:03.439 "r_mbytes_per_sec": 0, 00:11:03.439 "w_mbytes_per_sec": 0 00:11:03.439 }, 00:11:03.439 "claimed": true, 00:11:03.439 "claim_type": "exclusive_write", 00:11:03.439 "zoned": false, 00:11:03.439 "supported_io_types": { 00:11:03.439 "read": true, 00:11:03.439 "write": true, 00:11:03.439 "unmap": true, 00:11:03.439 "flush": true, 00:11:03.439 "reset": true, 00:11:03.439 "nvme_admin": false, 00:11:03.439 "nvme_io": false, 00:11:03.439 "nvme_io_md": false, 00:11:03.439 "write_zeroes": true, 00:11:03.439 "zcopy": true, 00:11:03.439 "get_zone_info": false, 00:11:03.439 "zone_management": false, 00:11:03.439 "zone_append": false, 00:11:03.439 "compare": false, 00:11:03.439 "compare_and_write": false, 00:11:03.439 "abort": true, 00:11:03.439 "seek_hole": false, 00:11:03.439 "seek_data": false, 00:11:03.439 "copy": true, 00:11:03.439 "nvme_iov_md": false 00:11:03.439 }, 00:11:03.439 "memory_domains": [ 00:11:03.439 { 00:11:03.439 "dma_device_id": "system", 00:11:03.439 "dma_device_type": 1 00:11:03.439 }, 00:11:03.439 { 00:11:03.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.439 "dma_device_type": 2 00:11:03.439 } 00:11:03.439 ], 00:11:03.439 "driver_specific": {} 00:11:03.439 }, 00:11:03.439 { 00:11:03.439 "name": "Passthru0", 00:11:03.439 "aliases": [ 00:11:03.439 "c7d6c20f-ddc5-5955-baab-075765d66829" 00:11:03.439 ], 00:11:03.439 "product_name": "passthru", 00:11:03.439 "block_size": 512, 00:11:03.439 "num_blocks": 16384, 00:11:03.439 "uuid": "c7d6c20f-ddc5-5955-baab-075765d66829", 00:11:03.439 "assigned_rate_limits": { 00:11:03.439 "rw_ios_per_sec": 0, 00:11:03.439 "rw_mbytes_per_sec": 0, 00:11:03.439 "r_mbytes_per_sec": 0, 00:11:03.439 "w_mbytes_per_sec": 0 00:11:03.439 }, 00:11:03.439 "claimed": false, 00:11:03.439 "zoned": false, 00:11:03.439 "supported_io_types": { 00:11:03.439 "read": true, 00:11:03.439 "write": true, 00:11:03.439 "unmap": true, 00:11:03.439 "flush": true, 00:11:03.439 "reset": true, 00:11:03.439 "nvme_admin": false, 00:11:03.439 "nvme_io": false, 00:11:03.439 "nvme_io_md": false, 00:11:03.439 "write_zeroes": true, 00:11:03.439 "zcopy": true, 00:11:03.439 "get_zone_info": false, 00:11:03.439 
"zone_management": false, 00:11:03.439 "zone_append": false, 00:11:03.439 "compare": false, 00:11:03.439 "compare_and_write": false, 00:11:03.439 "abort": true, 00:11:03.439 "seek_hole": false, 00:11:03.439 "seek_data": false, 00:11:03.439 "copy": true, 00:11:03.439 "nvme_iov_md": false 00:11:03.439 }, 00:11:03.439 "memory_domains": [ 00:11:03.439 { 00:11:03.439 "dma_device_id": "system", 00:11:03.439 "dma_device_type": 1 00:11:03.439 }, 00:11:03.439 { 00:11:03.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.439 "dma_device_type": 2 00:11:03.439 } 00:11:03.439 ], 00:11:03.439 "driver_specific": { 00:11:03.439 "passthru": { 00:11:03.439 "name": "Passthru0", 00:11:03.439 "base_bdev_name": "Malloc2" 00:11:03.439 } 00:11:03.439 } 00:11:03.439 } 00:11:03.439 ]' 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:03.439 00:11:03.439 real 0m0.213s 00:11:03.439 user 0m0.061s 00:11:03.439 sys 0m0.052s 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.439 07:21:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:03.439 ************************************ 00:11:03.439 END TEST rpc_daemon_integrity 00:11:03.439 ************************************ 00:11:03.439 07:21:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:11:03.439 07:21:07 rpc -- rpc/rpc.sh@84 -- # killprocess 66531 00:11:03.439 07:21:07 rpc -- common/autotest_common.sh@954 -- # '[' -z 66531 ']' 00:11:03.439 07:21:07 rpc -- common/autotest_common.sh@958 -- # kill -0 66531 00:11:03.439 07:21:07 rpc -- common/autotest_common.sh@959 -- # uname 00:11:03.439 07:21:07 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.439 07:21:07 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66531 00:11:03.439 07:21:07 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.439 07:21:07 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.439 killing process with pid 66531 
00:11:03.439 07:21:07 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66531' 00:11:03.439 07:21:07 rpc -- common/autotest_common.sh@973 -- # kill 66531 00:11:03.440 07:21:07 rpc -- common/autotest_common.sh@978 -- # wait 66531 00:11:06.727 00:11:06.727 real 0m5.299s 00:11:06.727 user 0m5.331s 00:11:06.727 sys 0m0.981s 00:11:06.727 07:21:10 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.727 07:21:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.727 ************************************ 00:11:06.727 END TEST rpc 00:11:06.727 ************************************ 00:11:06.727 07:21:10 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:06.727 07:21:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:06.727 07:21:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.727 07:21:10 -- common/autotest_common.sh@10 -- # set +x 00:11:06.727 ************************************ 00:11:06.728 START TEST skip_rpc 00:11:06.728 ************************************ 00:11:06.728 07:21:10 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:06.728 * Looking for test storage... 00:11:06.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:06.728 07:21:10 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:06.728 07:21:10 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:06.728 07:21:10 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:06.728 07:21:10 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@345 -- # : 1 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.728 07:21:10 skip_rpc -- scripts/common.sh@368 -- # return 0 00:11:06.728 07:21:10 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.728 07:21:10 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:06.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.728 --rc genhtml_branch_coverage=1 00:11:06.728 --rc genhtml_function_coverage=1 00:11:06.728 --rc genhtml_legend=1 00:11:06.728 --rc geninfo_all_blocks=1 00:11:06.728 --rc geninfo_unexecuted_blocks=1 00:11:06.728 00:11:06.728 ' 00:11:06.728 07:21:10 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:06.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.728 --rc genhtml_branch_coverage=1 00:11:06.728 --rc genhtml_function_coverage=1 00:11:06.728 --rc genhtml_legend=1 00:11:06.728 --rc geninfo_all_blocks=1 00:11:06.728 --rc geninfo_unexecuted_blocks=1 00:11:06.728 00:11:06.728 ' 00:11:06.728 07:21:10 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:06.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.728 --rc genhtml_branch_coverage=1 00:11:06.728 --rc genhtml_function_coverage=1 00:11:06.728 --rc genhtml_legend=1 00:11:06.728 --rc geninfo_all_blocks=1 00:11:06.728 --rc geninfo_unexecuted_blocks=1 00:11:06.728 00:11:06.728 ' 00:11:06.728 07:21:10 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:06.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.728 --rc genhtml_branch_coverage=1 00:11:06.728 --rc genhtml_function_coverage=1 00:11:06.728 --rc genhtml_legend=1 00:11:06.728 --rc geninfo_all_blocks=1 00:11:06.728 --rc geninfo_unexecuted_blocks=1 00:11:06.728 00:11:06.728 ' 00:11:06.728 07:21:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:06.728 07:21:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:06.728 07:21:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:11:06.728 07:21:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:06.728 07:21:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.728 07:21:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.728 ************************************ 00:11:06.728 START TEST skip_rpc 00:11:06.728 ************************************ 00:11:06.728 07:21:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:11:06.728 07:21:10 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=66760 00:11:06.728 07:21:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:06.728 07:21:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:11:06.728 07:21:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:11:06.728 [2024-11-20 07:21:10.393523] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:11:06.728 [2024-11-20 07:21:10.393759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66760 ] 00:11:06.728 [2024-11-20 07:21:10.595184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.988 [2024-11-20 07:21:10.741643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 66760 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 66760 ']' 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 66760 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66760 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.377 killing process with pid 66760 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66760' 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@973 
-- # kill 66760 00:11:12.377 07:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 66760 00:11:14.306 00:11:14.306 real 0m7.773s 00:11:14.306 user 0m7.269s 00:11:14.306 sys 0m0.434s 00:11:14.306 07:21:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.306 07:21:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.306 ************************************ 00:11:14.306 END TEST skip_rpc 00:11:14.306 ************************************ 00:11:14.306 07:21:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:11:14.306 07:21:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:14.306 07:21:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.306 07:21:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.306 ************************************ 00:11:14.306 START TEST skip_rpc_with_json 00:11:14.306 ************************************ 00:11:14.306 07:21:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:11:14.307 07:21:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:11:14.307 07:21:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=66875 00:11:14.307 07:21:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:14.307 07:21:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:14.307 07:21:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 66875 00:11:14.307 07:21:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 66875 ']' 00:11:14.307 07:21:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.307 07:21:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.307 07:21:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.307 07:21:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.307 07:21:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:14.307 [2024-11-20 07:21:18.208972] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
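(The skip_rpc_with_json section that follows exercises saving a running configuration: nvmf_get_transports is expected to fail with "No such device" while no transport exists, a TCP transport is then created, and save_config writes the target's state to test/rpc/config.json for the second half of the test to reload. A rough by-hand equivalent, again assuming the scripts/rpc.py client and the default RPC socket:

    ./scripts/rpc.py nvmf_get_transports --trtype tcp   # fails: no transport created yet
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > config.json          # JSON snapshot reloaded later in the test
)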
00:11:14.307 [2024-11-20 07:21:18.209123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66875 ] 00:11:14.565 [2024-11-20 07:21:18.373775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.823 [2024-11-20 07:21:18.522223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.759 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.759 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:11:15.759 07:21:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:11:15.759 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.759 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:15.759 [2024-11-20 07:21:19.614490] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:11:15.759 request: 00:11:15.759 { 00:11:15.759 "trtype": "tcp", 00:11:15.759 "method": "nvmf_get_transports", 00:11:15.759 "req_id": 1 00:11:15.759 } 00:11:15.759 Got JSON-RPC error response 00:11:15.759 response: 00:11:15.759 { 00:11:15.759 "code": -19, 00:11:15.759 "message": "No such device" 00:11:15.759 } 00:11:15.759 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:15.759 07:21:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:11:15.759 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.759 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:15.759 [2024-11-20 07:21:19.622693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.759 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.759 07:21:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:11:15.759 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.759 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:16.018 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.018 07:21:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:16.018 { 00:11:16.018 "subsystems": [ 00:11:16.018 { 00:11:16.018 "subsystem": "scheduler", 00:11:16.018 "config": [ 00:11:16.018 { 00:11:16.018 "method": "framework_set_scheduler", 00:11:16.018 "params": { 00:11:16.018 "name": "static" 00:11:16.018 } 00:11:16.018 } 00:11:16.018 ] 00:11:16.018 }, 00:11:16.018 { 00:11:16.018 "subsystem": "vmd", 00:11:16.018 "config": [] 00:11:16.018 }, 00:11:16.018 { 00:11:16.018 "subsystem": "sock", 00:11:16.018 "config": [ 00:11:16.018 { 00:11:16.018 "method": "sock_set_default_impl", 00:11:16.018 "params": { 00:11:16.018 "impl_name": "posix" 00:11:16.018 } 00:11:16.018 }, 00:11:16.018 { 00:11:16.018 "method": "sock_impl_set_options", 00:11:16.018 "params": { 00:11:16.018 "impl_name": "ssl", 00:11:16.018 "recv_buf_size": 4096, 00:11:16.018 "send_buf_size": 4096, 00:11:16.018 "enable_recv_pipe": true, 00:11:16.018 "enable_quickack": false, 00:11:16.018 "enable_placement_id": 0, 
00:11:16.018 "enable_zerocopy_send_server": true, 00:11:16.018 "enable_zerocopy_send_client": false, 00:11:16.018 "zerocopy_threshold": 0, 00:11:16.018 "tls_version": 0, 00:11:16.018 "enable_ktls": false 00:11:16.018 } 00:11:16.018 }, 00:11:16.018 { 00:11:16.018 "method": "sock_impl_set_options", 00:11:16.018 "params": { 00:11:16.018 "impl_name": "posix", 00:11:16.018 "recv_buf_size": 2097152, 00:11:16.018 "send_buf_size": 2097152, 00:11:16.018 "enable_recv_pipe": true, 00:11:16.018 "enable_quickack": false, 00:11:16.018 "enable_placement_id": 0, 00:11:16.018 "enable_zerocopy_send_server": true, 00:11:16.018 "enable_zerocopy_send_client": false, 00:11:16.018 "zerocopy_threshold": 0, 00:11:16.018 "tls_version": 0, 00:11:16.018 "enable_ktls": false 00:11:16.018 } 00:11:16.018 } 00:11:16.018 ] 00:11:16.018 }, 00:11:16.018 { 00:11:16.018 "subsystem": "iobuf", 00:11:16.018 "config": [ 00:11:16.018 { 00:11:16.018 "method": "iobuf_set_options", 00:11:16.018 "params": { 00:11:16.018 "small_pool_count": 8192, 00:11:16.018 "large_pool_count": 1024, 00:11:16.018 "small_bufsize": 8192, 00:11:16.018 "large_bufsize": 135168, 00:11:16.018 "enable_numa": false 00:11:16.018 } 00:11:16.018 } 00:11:16.018 ] 00:11:16.018 }, 00:11:16.018 { 00:11:16.018 "subsystem": "keyring", 00:11:16.019 "config": [] 00:11:16.019 }, 00:11:16.019 { 00:11:16.019 "subsystem": "fsdev", 00:11:16.019 "config": [ 00:11:16.019 { 00:11:16.019 "method": "fsdev_set_opts", 00:11:16.019 "params": { 00:11:16.019 "fsdev_io_pool_size": 65535, 00:11:16.019 "fsdev_io_cache_size": 256 00:11:16.019 } 00:11:16.019 } 00:11:16.019 ] 00:11:16.019 }, 00:11:16.019 { 00:11:16.019 "subsystem": "accel", 00:11:16.019 "config": [ 00:11:16.019 { 00:11:16.019 "method": "accel_set_options", 00:11:16.019 "params": { 00:11:16.019 "small_cache_size": 128, 00:11:16.019 "large_cache_size": 16, 00:11:16.019 "task_count": 2048, 00:11:16.019 "sequence_count": 2048, 00:11:16.019 "buf_count": 2048 00:11:16.019 } 00:11:16.019 } 00:11:16.019 ] 00:11:16.019 }, 00:11:16.019 { 00:11:16.019 "subsystem": "bdev", 00:11:16.019 "config": [ 00:11:16.019 { 00:11:16.019 "method": "bdev_set_options", 00:11:16.019 "params": { 00:11:16.019 "bdev_io_pool_size": 65535, 00:11:16.019 "bdev_io_cache_size": 256, 00:11:16.019 "bdev_auto_examine": true, 00:11:16.019 "iobuf_small_cache_size": 128, 00:11:16.019 "iobuf_large_cache_size": 16 00:11:16.019 } 00:11:16.019 }, 00:11:16.019 { 00:11:16.019 "method": "bdev_raid_set_options", 00:11:16.019 "params": { 00:11:16.019 "process_window_size_kb": 1024, 00:11:16.019 "process_max_bandwidth_mb_sec": 0 00:11:16.019 } 00:11:16.019 }, 00:11:16.019 { 00:11:16.019 "method": "bdev_nvme_set_options", 00:11:16.019 "params": { 00:11:16.019 "action_on_timeout": "none", 00:11:16.019 "timeout_us": 0, 00:11:16.019 "timeout_admin_us": 0, 00:11:16.019 "keep_alive_timeout_ms": 10000, 00:11:16.019 "arbitration_burst": 0, 00:11:16.019 "low_priority_weight": 0, 00:11:16.019 "medium_priority_weight": 0, 00:11:16.019 "high_priority_weight": 0, 00:11:16.019 "nvme_adminq_poll_period_us": 10000, 00:11:16.019 "nvme_ioq_poll_period_us": 0, 00:11:16.019 "io_queue_requests": 0, 00:11:16.019 "delay_cmd_submit": true, 00:11:16.019 "transport_retry_count": 4, 00:11:16.019 "bdev_retry_count": 3, 00:11:16.019 "transport_ack_timeout": 0, 00:11:16.019 "ctrlr_loss_timeout_sec": 0, 00:11:16.019 "reconnect_delay_sec": 0, 00:11:16.019 "fast_io_fail_timeout_sec": 0, 00:11:16.019 "disable_auto_failback": false, 00:11:16.019 "generate_uuids": false, 00:11:16.019 "transport_tos": 0, 
00:11:16.019 "nvme_error_stat": false, 00:11:16.019 "rdma_srq_size": 0, 00:11:16.019 "io_path_stat": false, 00:11:16.019 "allow_accel_sequence": false, 00:11:16.019 "rdma_max_cq_size": 0, 00:11:16.019 "rdma_cm_event_timeout_ms": 0, 00:11:16.019 "dhchap_digests": [ 00:11:16.019 "sha256", 00:11:16.019 "sha384", 00:11:16.019 "sha512" 00:11:16.019 ], 00:11:16.019 "dhchap_dhgroups": [ 00:11:16.019 "null", 00:11:16.019 "ffdhe2048", 00:11:16.019 "ffdhe3072", 00:11:16.019 "ffdhe4096", 00:11:16.019 "ffdhe6144", 00:11:16.019 "ffdhe8192" 00:11:16.019 ] 00:11:16.019 } 00:11:16.019 }, 00:11:16.019 { 00:11:16.019 "method": "bdev_nvme_set_hotplug", 00:11:16.019 "params": { 00:11:16.019 "period_us": 100000, 00:11:16.019 "enable": false 00:11:16.019 } 00:11:16.019 }, 00:11:16.019 { 00:11:16.019 "method": "bdev_iscsi_set_options", 00:11:16.019 "params": { 00:11:16.019 "timeout_sec": 30 00:11:16.019 } 00:11:16.019 }, 00:11:16.019 { 00:11:16.019 "method": "bdev_wait_for_examine" 00:11:16.019 } 00:11:16.019 ] 00:11:16.019 }, 00:11:16.019 { 00:11:16.019 "subsystem": "nvmf", 00:11:16.019 "config": [ 00:11:16.019 { 00:11:16.019 "method": "nvmf_set_config", 00:11:16.019 "params": { 00:11:16.019 "discovery_filter": "match_any", 00:11:16.019 "admin_cmd_passthru": { 00:11:16.019 "identify_ctrlr": false 00:11:16.019 }, 00:11:16.019 "dhchap_digests": [ 00:11:16.019 "sha256", 00:11:16.019 "sha384", 00:11:16.019 "sha512" 00:11:16.019 ], 00:11:16.019 "dhchap_dhgroups": [ 00:11:16.019 "null", 00:11:16.019 "ffdhe2048", 00:11:16.019 "ffdhe3072", 00:11:16.019 "ffdhe4096", 00:11:16.019 "ffdhe6144", 00:11:16.019 "ffdhe8192" 00:11:16.019 ] 00:11:16.019 } 00:11:16.019 }, 00:11:16.019 { 00:11:16.019 "method": "nvmf_set_max_subsystems", 00:11:16.019 "params": { 00:11:16.019 "max_subsystems": 1024 00:11:16.019 } 00:11:16.019 }, 00:11:16.019 { 00:11:16.019 "method": "nvmf_set_crdt", 00:11:16.019 "params": { 00:11:16.019 "crdt1": 0, 00:11:16.019 "crdt2": 0, 00:11:16.019 "crdt3": 0 00:11:16.019 } 00:11:16.019 }, 00:11:16.019 { 00:11:16.019 "method": "nvmf_create_transport", 00:11:16.019 "params": { 00:11:16.019 "trtype": "TCP", 00:11:16.019 "max_queue_depth": 128, 00:11:16.019 "max_io_qpairs_per_ctrlr": 127, 00:11:16.019 "in_capsule_data_size": 4096, 00:11:16.019 "max_io_size": 131072, 00:11:16.019 "io_unit_size": 131072, 00:11:16.019 "max_aq_depth": 128, 00:11:16.019 "num_shared_buffers": 511, 00:11:16.019 "buf_cache_size": 4294967295, 00:11:16.019 "dif_insert_or_strip": false, 00:11:16.019 "zcopy": false, 00:11:16.019 "c2h_success": true, 00:11:16.019 "sock_priority": 0, 00:11:16.019 "abort_timeout_sec": 1, 00:11:16.019 "ack_timeout": 0, 00:11:16.019 "data_wr_pool_size": 0 00:11:16.019 } 00:11:16.019 } 00:11:16.019 ] 00:11:16.019 }, 00:11:16.019 { 00:11:16.019 "subsystem": "nbd", 00:11:16.019 "config": [] 00:11:16.019 }, 00:11:16.019 { 00:11:16.019 "subsystem": "ublk", 00:11:16.019 "config": [] 00:11:16.019 }, 00:11:16.019 { 00:11:16.019 "subsystem": "vhost_blk", 00:11:16.019 "config": [] 00:11:16.019 }, 00:11:16.019 { 00:11:16.019 "subsystem": "scsi", 00:11:16.019 "config": null 00:11:16.019 }, 00:11:16.019 { 00:11:16.019 "subsystem": "iscsi", 00:11:16.019 "config": [ 00:11:16.019 { 00:11:16.019 "method": "iscsi_set_options", 00:11:16.019 "params": { 00:11:16.019 "node_base": "iqn.2016-06.io.spdk", 00:11:16.019 "max_sessions": 128, 00:11:16.019 "max_connections_per_session": 2, 00:11:16.019 "max_queue_depth": 64, 00:11:16.019 "default_time2wait": 2, 00:11:16.019 "default_time2retain": 20, 00:11:16.019 "first_burst_length": 8192, 
00:11:16.019 "immediate_data": true, 00:11:16.019 "allow_duplicated_isid": false, 00:11:16.019 "error_recovery_level": 0, 00:11:16.019 "nop_timeout": 60, 00:11:16.019 "nop_in_interval": 30, 00:11:16.020 "disable_chap": false, 00:11:16.020 "require_chap": false, 00:11:16.020 "mutual_chap": false, 00:11:16.020 "chap_group": 0, 00:11:16.020 "max_large_datain_per_connection": 64, 00:11:16.020 "max_r2t_per_connection": 4, 00:11:16.020 "pdu_pool_size": 36864, 00:11:16.020 "immediate_data_pool_size": 16384, 00:11:16.020 "data_out_pool_size": 2048 00:11:16.020 } 00:11:16.020 } 00:11:16.020 ] 00:11:16.020 }, 00:11:16.020 { 00:11:16.020 "subsystem": "vhost_scsi", 00:11:16.020 "config": [] 00:11:16.020 } 00:11:16.020 ] 00:11:16.020 } 00:11:16.020 07:21:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:16.020 07:21:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 66875 00:11:16.020 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 66875 ']' 00:11:16.020 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 66875 00:11:16.020 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:11:16.020 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.020 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66875 00:11:16.020 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.020 killing process with pid 66875 00:11:16.020 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.020 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66875' 00:11:16.020 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 66875 00:11:16.020 07:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 66875 00:11:19.307 07:21:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=66931 00:11:19.307 07:21:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:19.307 07:21:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:11:24.608 07:21:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 66931 00:11:24.608 07:21:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 66931 ']' 00:11:24.608 07:21:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 66931 00:11:24.608 07:21:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:11:24.608 07:21:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.608 07:21:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66931 00:11:24.608 07:21:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.608 07:21:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.608 killing process with pid 66931 00:11:24.608 07:21:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66931' 00:11:24.608 07:21:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 
66931 00:11:24.608 07:21:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 66931 00:11:26.513 07:21:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:26.513 07:21:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:26.513 00:11:26.513 real 0m12.141s 00:11:26.513 user 0m11.636s 00:11:26.513 sys 0m0.945s 00:11:26.513 07:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.513 07:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:26.513 ************************************ 00:11:26.513 END TEST skip_rpc_with_json 00:11:26.513 ************************************ 00:11:26.513 07:21:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:11:26.513 07:21:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:26.513 07:21:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.513 07:21:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.513 ************************************ 00:11:26.513 START TEST skip_rpc_with_delay 00:11:26.513 ************************************ 00:11:26.513 07:21:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:11:26.513 07:21:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:26.513 07:21:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:11:26.513 07:21:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:26.513 07:21:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:26.513 07:21:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.513 07:21:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:26.513 07:21:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.514 07:21:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:26.514 07:21:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.514 07:21:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:26.514 07:21:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:26.514 07:21:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:26.514 [2024-11-20 07:21:30.421871] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:11:26.773 07:21:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:11:26.773 07:21:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:26.773 07:21:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:26.773 07:21:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:26.773 00:11:26.773 real 0m0.162s 00:11:26.773 user 0m0.086s 00:11:26.773 sys 0m0.076s 00:11:26.773 07:21:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.773 07:21:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:11:26.773 ************************************ 00:11:26.773 END TEST skip_rpc_with_delay 00:11:26.773 ************************************ 00:11:26.773 07:21:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:11:26.773 07:21:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:11:26.773 07:21:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:11:26.773 07:21:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:26.773 07:21:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.773 07:21:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.773 ************************************ 00:11:26.773 START TEST exit_on_failed_rpc_init 00:11:26.773 ************************************ 00:11:26.773 07:21:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:11:26.773 07:21:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=67065 00:11:26.773 07:21:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:26.773 07:21:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 67065 00:11:26.773 07:21:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 67065 ']' 00:11:26.773 07:21:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.773 07:21:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.773 07:21:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.773 07:21:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.773 07:21:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:26.773 [2024-11-20 07:21:30.638640] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:11:26.773 [2024-11-20 07:21:30.638809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67065 ] 00:11:27.032 [2024-11-20 07:21:30.799972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.032 [2024-11-20 07:21:30.939494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.414 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.414 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:11:28.414 07:21:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:28.414 07:21:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:28.414 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:11:28.414 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:28.414 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:28.414 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:28.414 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:28.414 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:28.414 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:28.414 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:28.414 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:28.414 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:28.414 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:28.414 [2024-11-20 07:21:32.094279] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:11:28.414 [2024-11-20 07:21:32.094420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67092 ] 00:11:28.414 [2024-11-20 07:21:32.276873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.678 [2024-11-20 07:21:32.435386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.678 [2024-11-20 07:21:32.435534] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:11:28.678 [2024-11-20 07:21:32.435566] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:28.678 [2024-11-20 07:21:32.435585] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:28.946 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:11:28.946 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:28.946 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:11:28.946 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:11:28.946 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:11:28.946 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:28.946 07:21:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:28.946 07:21:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 67065 00:11:28.946 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 67065 ']' 00:11:28.946 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 67065 00:11:28.946 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:11:28.946 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.946 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67065 00:11:28.946 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.946 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.946 killing process with pid 67065 00:11:28.946 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67065' 00:11:28.946 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 67065 00:11:28.946 07:21:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 67065 00:11:32.241 00:11:32.242 real 0m4.960s 00:11:32.242 user 0m5.356s 00:11:32.242 sys 0m0.628s 00:11:32.242 07:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.242 07:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:32.242 ************************************ 00:11:32.242 END TEST exit_on_failed_rpc_init 00:11:32.242 ************************************ 00:11:32.242 07:21:35 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:32.242 00:11:32.242 real 0m25.462s 00:11:32.242 user 0m24.532s 00:11:32.242 sys 0m2.350s 00:11:32.242 07:21:35 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.242 07:21:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.242 ************************************ 00:11:32.242 END TEST skip_rpc 00:11:32.242 ************************************ 00:11:32.242 07:21:35 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:32.242 07:21:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:32.242 07:21:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.242 07:21:35 -- common/autotest_common.sh@10 -- # set +x 00:11:32.242 
************************************ 00:11:32.242 START TEST rpc_client 00:11:32.242 ************************************ 00:11:32.242 07:21:35 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:32.242 * Looking for test storage... 00:11:32.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:11:32.242 07:21:35 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:32.242 07:21:35 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:11:32.242 07:21:35 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:32.242 07:21:35 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@345 -- # : 1 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@353 -- # local d=1 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@355 -- # echo 1 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@353 -- # local d=2 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@355 -- # echo 2 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.242 07:21:35 rpc_client -- scripts/common.sh@368 -- # return 0 00:11:32.242 07:21:35 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.242 07:21:35 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:32.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.242 --rc genhtml_branch_coverage=1 00:11:32.242 --rc genhtml_function_coverage=1 00:11:32.242 --rc genhtml_legend=1 00:11:32.242 --rc geninfo_all_blocks=1 00:11:32.242 --rc geninfo_unexecuted_blocks=1 00:11:32.242 00:11:32.242 ' 00:11:32.242 07:21:35 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:32.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.242 --rc genhtml_branch_coverage=1 00:11:32.242 --rc genhtml_function_coverage=1 00:11:32.242 --rc genhtml_legend=1 00:11:32.242 --rc geninfo_all_blocks=1 00:11:32.242 --rc geninfo_unexecuted_blocks=1 00:11:32.242 00:11:32.242 ' 00:11:32.242 07:21:35 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:32.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.242 --rc genhtml_branch_coverage=1 00:11:32.242 --rc genhtml_function_coverage=1 00:11:32.242 --rc genhtml_legend=1 00:11:32.242 --rc geninfo_all_blocks=1 00:11:32.242 --rc geninfo_unexecuted_blocks=1 00:11:32.242 00:11:32.242 ' 00:11:32.242 07:21:35 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:32.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.242 --rc genhtml_branch_coverage=1 00:11:32.242 --rc genhtml_function_coverage=1 00:11:32.242 --rc genhtml_legend=1 00:11:32.242 --rc geninfo_all_blocks=1 00:11:32.242 --rc geninfo_unexecuted_blocks=1 00:11:32.242 00:11:32.242 ' 00:11:32.242 07:21:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:11:32.242 OK 00:11:32.242 07:21:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:11:32.242 ************************************ 00:11:32.242 END TEST rpc_client 00:11:32.242 ************************************ 00:11:32.242 00:11:32.242 real 0m0.242s 00:11:32.242 user 0m0.131s 00:11:32.242 sys 0m0.124s 00:11:32.242 07:21:35 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.242 07:21:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:11:32.242 07:21:35 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:32.242 07:21:35 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:32.242 07:21:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.242 07:21:35 -- common/autotest_common.sh@10 -- # set +x 00:11:32.242 ************************************ 00:11:32.242 START TEST json_config 00:11:32.242 ************************************ 00:11:32.242 07:21:35 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:32.242 07:21:35 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:32.242 07:21:35 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:11:32.242 07:21:35 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:32.242 07:21:36 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:32.242 07:21:36 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.242 07:21:36 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.242 07:21:36 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.242 07:21:36 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.242 07:21:36 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.242 07:21:36 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.242 07:21:36 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.242 07:21:36 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.242 07:21:36 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.242 07:21:36 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.242 07:21:36 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.242 07:21:36 json_config -- scripts/common.sh@344 -- # case "$op" in 00:11:32.242 07:21:36 json_config -- scripts/common.sh@345 -- # : 1 00:11:32.242 07:21:36 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.242 07:21:36 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:32.242 07:21:36 json_config -- scripts/common.sh@365 -- # decimal 1 00:11:32.242 07:21:36 json_config -- scripts/common.sh@353 -- # local d=1 00:11:32.242 07:21:36 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.242 07:21:36 json_config -- scripts/common.sh@355 -- # echo 1 00:11:32.242 07:21:36 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.242 07:21:36 json_config -- scripts/common.sh@366 -- # decimal 2 00:11:32.242 07:21:36 json_config -- scripts/common.sh@353 -- # local d=2 00:11:32.242 07:21:36 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.242 07:21:36 json_config -- scripts/common.sh@355 -- # echo 2 00:11:32.242 07:21:36 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.243 07:21:36 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.243 07:21:36 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.243 07:21:36 json_config -- scripts/common.sh@368 -- # return 0 00:11:32.243 07:21:36 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.243 07:21:36 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:32.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.243 --rc genhtml_branch_coverage=1 00:11:32.243 --rc genhtml_function_coverage=1 00:11:32.243 --rc genhtml_legend=1 00:11:32.243 --rc geninfo_all_blocks=1 00:11:32.243 --rc geninfo_unexecuted_blocks=1 00:11:32.243 00:11:32.243 ' 00:11:32.243 07:21:36 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:32.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.243 --rc genhtml_branch_coverage=1 00:11:32.243 --rc genhtml_function_coverage=1 00:11:32.243 --rc genhtml_legend=1 00:11:32.243 --rc geninfo_all_blocks=1 00:11:32.243 --rc geninfo_unexecuted_blocks=1 00:11:32.243 00:11:32.243 ' 00:11:32.243 07:21:36 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:32.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.243 --rc genhtml_branch_coverage=1 00:11:32.243 --rc genhtml_function_coverage=1 00:11:32.243 --rc genhtml_legend=1 00:11:32.243 --rc geninfo_all_blocks=1 00:11:32.243 --rc geninfo_unexecuted_blocks=1 00:11:32.243 00:11:32.243 ' 00:11:32.243 07:21:36 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:32.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.243 --rc genhtml_branch_coverage=1 00:11:32.243 --rc genhtml_function_coverage=1 00:11:32.243 --rc genhtml_legend=1 00:11:32.243 --rc geninfo_all_blocks=1 00:11:32.243 --rc geninfo_unexecuted_blocks=1 00:11:32.243 00:11:32.243 ' 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.243 
07:21:36 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2cb5538-f466-4c14-8b32-9b15eda1a8a3 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=d2cb5538-f466-4c14-8b32-9b15eda1a8a3 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:32.243 07:21:36 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:11:32.243 07:21:36 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.243 07:21:36 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.243 07:21:36 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.243 07:21:36 json_config -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:32.243 07:21:36 json_config -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:32.243 07:21:36 json_config -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:32.243 07:21:36 json_config -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:32.243 07:21:36 json_config -- paths/export.sh@6 -- # export PATH 00:11:32.243 07:21:36 json_config -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:11:32.243 07:21:36 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:32.243 07:21:36 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:32.243 07:21:36 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@50 -- # : 0 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:32.243 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:32.243 07:21:36 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@362 
-- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:32.243 INFO: JSON configuration test init 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:11:32.243 07:21:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:32.243 07:21:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:11:32.243 07:21:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:32.243 07:21:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:32.243 07:21:36 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:11:32.243 07:21:36 json_config -- json_config/common.sh@9 -- # local app=target 00:11:32.243 07:21:36 json_config -- json_config/common.sh@10 -- # shift 00:11:32.244 07:21:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:32.244 07:21:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:32.244 07:21:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:11:32.244 07:21:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:32.244 07:21:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:32.244 07:21:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=67253 00:11:32.244 07:21:36 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:11:32.244 07:21:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:32.244 Waiting for target to run... 00:11:32.244 07:21:36 json_config -- json_config/common.sh@25 -- # waitforlisten 67253 /var/tmp/spdk_tgt.sock 00:11:32.244 07:21:36 json_config -- common/autotest_common.sh@835 -- # '[' -z 67253 ']' 00:11:32.244 07:21:36 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:32.244 07:21:36 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:32.244 07:21:36 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:32.244 07:21:36 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.244 07:21:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:32.503 [2024-11-20 07:21:36.175929] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:11:32.503 [2024-11-20 07:21:36.176114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67253 ] 00:11:32.761 [2024-11-20 07:21:36.622535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.019 [2024-11-20 07:21:36.781765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.278 07:21:37 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.278 07:21:37 json_config -- common/autotest_common.sh@868 -- # return 0 00:11:33.278 00:11:33.278 07:21:37 json_config -- json_config/common.sh@26 -- # echo '' 00:11:33.278 07:21:37 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:11:33.278 07:21:37 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:11:33.278 07:21:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:33.278 07:21:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:33.278 07:21:37 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:11:33.278 07:21:37 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:11:33.278 07:21:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:33.278 07:21:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:33.278 07:21:37 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:11:33.278 07:21:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:11:33.278 07:21:37 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:11:34.655 07:21:38 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:11:34.655 07:21:38 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:11:34.655 07:21:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:34.655 07:21:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:34.655 07:21:38 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:11:34.655 07:21:38 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:11:34.655 07:21:38 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:11:34.655 07:21:38 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:11:34.655 07:21:38 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:11:34.655 07:21:38 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:11:34.655 07:21:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:11:34.655 07:21:38 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@51 -- # local get_types 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister 
fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@54 -- # sort 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:11:34.915 07:21:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:34.915 07:21:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@62 -- # return 0 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@285 -- # [[ 1 -eq 1 ]] 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@286 -- # create_bdev_subsystem_config 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@112 -- # timing_enter create_bdev_subsystem_config 00:11:34.915 07:21:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:34.915 07:21:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@114 -- # expected_notifications=() 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@114 -- # local expected_notifications 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@118 -- # expected_notifications+=($(get_notifications)) 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@118 -- # get_notifications 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@66 -- # local ev_type ev_ctx event_id 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@65 -- # tgt_rpc notify_get_notifications -i 0 00:11:34.915 07:21:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:34.915 07:21:38 json_config -- json_config/json_config.sh@65 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:11:35.175 07:21:38 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1 00:11:35.175 07:21:38 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:35.175 07:21:38 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:35.175 07:21:38 json_config -- json_config/json_config.sh@120 -- # [[ 1 -eq 1 ]] 00:11:35.175 07:21:38 json_config -- json_config/json_config.sh@121 -- # local lvol_store_base_bdev=Nvme0n1 00:11:35.175 07:21:38 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:11:35.175 07:21:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:11:35.432 Nvme0n1p0 Nvme0n1p1 00:11:35.432 07:21:39 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_split_create Malloc0 3 00:11:35.432 07:21:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:11:35.691 [2024-11-20 07:21:39.459236] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: Malloc0 00:11:35.691 [2024-11-20 07:21:39.459321] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:11:35.691 00:11:35.691 07:21:39 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:11:35.691 07:21:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:11:35.950 Malloc3 00:11:35.950 07:21:39 json_config -- json_config/json_config.sh@126 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:11:35.950 07:21:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:11:36.209 [2024-11-20 07:21:40.010895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:36.209 [2024-11-20 07:21:40.011011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.209 [2024-11-20 07:21:40.011047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:11:36.209 [2024-11-20 07:21:40.011061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.209 [2024-11-20 07:21:40.013671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.209 [2024-11-20 07:21:40.013744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:11:36.209 PTBdevFromMalloc3 00:11:36.209 07:21:40 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_null_create Null0 32 512 00:11:36.209 07:21:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:11:36.468 Null0 00:11:36.468 07:21:40 json_config -- json_config/json_config.sh@130 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:11:36.468 07:21:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:11:36.729 Malloc0 00:11:36.729 07:21:40 json_config -- json_config/json_config.sh@131 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:11:36.729 07:21:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:11:36.988 Malloc1 00:11:36.988 07:21:40 json_config -- json_config/json_config.sh@144 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:11:36.988 07:21:40 json_config -- json_config/json_config.sh@147 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:11:37.246 102400+0 records in 00:11:37.246 102400+0 records out 00:11:37.246 104857600 bytes (105 MB, 100 MiB) copied, 0.148471 s, 706 MB/s 00:11:37.246 07:21:40 json_config -- json_config/json_config.sh@148 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:11:37.246 07:21:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:11:37.503 aio_disk 00:11:37.503 07:21:41 json_config -- 
json_config/json_config.sh@149 -- # expected_notifications+=(bdev_register:aio_disk) 00:11:37.503 07:21:41 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:11:37.503 07:21:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:11:37.503 48370e9a-7a6a-488e-b7d3-e79f77a8cffe 00:11:37.763 07:21:41 json_config -- json_config/json_config.sh@161 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:11:37.763 07:21:41 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:11:37.763 07:21:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:11:37.763 07:21:41 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:11:37.763 07:21:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:11:38.057 07:21:41 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:11:38.057 07:21:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:11:38.317 07:21:42 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:11:38.317 07:21:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:11:38.577 07:21:42 json_config -- json_config/json_config.sh@164 -- # [[ 0 -eq 1 ]] 00:11:38.577 07:21:42 json_config -- json_config/json_config.sh@179 -- # [[ 0 -eq 1 ]] 00:11:38.577 07:21:42 json_config -- json_config/json_config.sh@185 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:72eb83b2-9801-4b82-9a98-3b901eb64bc0 bdev_register:3400ccc1-e168-4486-9b60-b8e7594506d9 bdev_register:6c84fd9c-b6e3-4579-a500-97fd19d1c3e8 bdev_register:2946231d-67e3-42cf-8c55-644c2850dc1e 00:11:38.577 07:21:42 json_config -- json_config/json_config.sh@74 -- # local events_to_check 00:11:38.577 07:21:42 json_config -- json_config/json_config.sh@75 -- # local recorded_events 00:11:38.577 07:21:42 json_config -- json_config/json_config.sh@78 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:11:38.577 07:21:42 json_config -- json_config/json_config.sh@78 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:72eb83b2-9801-4b82-9a98-3b901eb64bc0 
bdev_register:3400ccc1-e168-4486-9b60-b8e7594506d9 bdev_register:6c84fd9c-b6e3-4579-a500-97fd19d1c3e8 bdev_register:2946231d-67e3-42cf-8c55-644c2850dc1e 00:11:38.577 07:21:42 json_config -- json_config/json_config.sh@78 -- # sort 00:11:38.577 07:21:42 json_config -- json_config/json_config.sh@79 -- # recorded_events=($(get_notifications | sort)) 00:11:38.577 07:21:42 json_config -- json_config/json_config.sh@79 -- # get_notifications 00:11:38.577 07:21:42 json_config -- json_config/json_config.sh@66 -- # local ev_type ev_ctx event_id 00:11:38.577 07:21:42 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:38.577 07:21:42 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:38.577 07:21:42 json_config -- json_config/json_config.sh@79 -- # sort 00:11:38.577 07:21:42 json_config -- json_config/json_config.sh@65 -- # tgt_rpc notify_get_notifications -i 0 00:11:38.577 07:21:42 json_config -- json_config/json_config.sh@65 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:11:38.577 07:21:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1p1 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1p0 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc3 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:PTBdevFromMalloc3 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Null0 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0p2 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0p1 
00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0p0 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc1 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:aio_disk 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:72eb83b2-9801-4b82-9a98-3b901eb64bc0 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:3400ccc1-e168-4486-9b60-b8e7594506d9 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:6c84fd9c-b6e3-4579-a500-97fd19d1c3e8 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:2946231d-67e3-42cf-8c55-644c2850dc1e 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # IFS=: 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@81 -- # [[ bdev_register:2946231d-67e3-42cf-8c55-644c2850dc1e bdev_register:3400ccc1-e168-4486-9b60-b8e7594506d9 bdev_register:6c84fd9c-b6e3-4579-a500-97fd19d1c3e8 bdev_register:72eb83b2-9801-4b82-9a98-3b901eb64bc0 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\9\4\6\2\3\1\d\-\6\7\e\3\-\4\2\c\f\-\8\c\5\5\-\6\4\4\c\2\8\5\0\d\c\1\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\3\4\0\0\c\c\c\1\-\e\1\6\8\-\4\4\8\6\-\9\b\6\0\-\b\8\e\7\5\9\4\5\0\6\d\9\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\c\8\4\f\d\9\c\-\b\6\e\3\-\4\5\7\9\-\a\5\0\0\-\9\7\f\d\1\9\d\1\c\3\e\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\2\e\b\8\3\b\2\-\9\8\0\1\-\4\b\8\2\-\9\a\9\8\-\3\b\9\0\1\e\b\6\4\b\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ 
\b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@93 -- # cat 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@93 -- # printf ' %s\n' bdev_register:2946231d-67e3-42cf-8c55-644c2850dc1e bdev_register:3400ccc1-e168-4486-9b60-b8e7594506d9 bdev_register:6c84fd9c-b6e3-4579-a500-97fd19d1c3e8 bdev_register:72eb83b2-9801-4b82-9a98-3b901eb64bc0 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:11:38.837 Expected events matched: 00:11:38.837 bdev_register:2946231d-67e3-42cf-8c55-644c2850dc1e 00:11:38.837 bdev_register:3400ccc1-e168-4486-9b60-b8e7594506d9 00:11:38.837 bdev_register:6c84fd9c-b6e3-4579-a500-97fd19d1c3e8 00:11:38.837 bdev_register:72eb83b2-9801-4b82-9a98-3b901eb64bc0 00:11:38.837 bdev_register:Malloc0 00:11:38.837 bdev_register:Malloc0p0 00:11:38.837 bdev_register:Malloc0p1 00:11:38.837 bdev_register:Malloc0p2 00:11:38.837 bdev_register:Malloc1 00:11:38.837 bdev_register:Malloc3 00:11:38.837 bdev_register:Null0 00:11:38.837 bdev_register:Nvme0n1 00:11:38.837 bdev_register:Nvme0n1p0 00:11:38.837 bdev_register:Nvme0n1p1 00:11:38.837 bdev_register:PTBdevFromMalloc3 00:11:38.837 bdev_register:aio_disk 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@187 -- # timing_exit create_bdev_subsystem_config 00:11:38.837 07:21:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:38.837 07:21:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@297 -- # [[ 0 -eq 1 ]] 00:11:38.837 07:21:42 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:11:38.837 07:21:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:38.837 07:21:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:39.096 07:21:42 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:11:39.096 07:21:42 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:39.096 07:21:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:39.096 MallocBdevForConfigChangeCheck 00:11:39.356 07:21:43 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:11:39.356 07:21:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:39.356 07:21:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:39.356 07:21:43 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:11:39.356 07:21:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:39.613 INFO: shutting down applications... 
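For readers skimming the trace above: the create_bdev_subsystem_config step boils down to the following RPC sequence. This is a condensed sketch, not the test script itself; the RPC and SOCK shorthands are ours, while every command, size and path is the one actually invoked in this run.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk_tgt.sock

    # Plain bdevs: malloc, a passthru layered on a malloc, and a null bdev
    $RPC -s $SOCK bdev_malloc_create 8 4096 --name Malloc3
    $RPC -s $SOCK bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
    $RPC -s $SOCK bdev_null_create Null0 32 512
    $RPC -s $SOCK bdev_malloc_create 32 512 --name Malloc0
    $RPC -s $SOCK bdev_malloc_create 16 4096 --name Malloc1

    # AIO bdev backed by a 100 MiB file
    dd if=/dev/zero of=/sample_aio bs=1024 count=102400
    $RPC -s $SOCK bdev_aio_create /sample_aio aio_disk 1024

    # Logical-volume store on Nvme0n1p0, then a volume, a thin volume,
    # a snapshot of lvol0 and a clone of that snapshot
    $RPC -s $SOCK bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
    $RPC -s $SOCK bdev_lvol_create -l lvs_test lvol0 32
    $RPC -s $SOCK bdev_lvol_create -l lvs_test -t lvol1 32
    $RPC -s $SOCK bdev_lvol_snapshot lvs_test/lvol0 snapshot0
    $RPC -s $SOCK bdev_lvol_clone lvs_test/snapshot0 clone0

    # Every registration produces a notification; list them the same way
    # the test does before the expected-vs-recorded comparison
    $RPC -s $SOCK notify_get_notifications -i 0 \
        | jq -r '.[] | "\(.type):\(.ctx):\(.id)"' | sort

Sorting both the expected_notifications array and the recorded events before the string compare is what makes the "Expected events matched" check independent of registration order.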
00:11:39.613 07:21:43 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:11:39.613 07:21:43 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:11:39.613 07:21:43 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:11:39.613 07:21:43 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:11:39.613 07:21:43 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:11:39.871 [2024-11-20 07:21:43.685531] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:11:40.129 Calling clear_vhost_scsi_subsystem 00:11:40.129 Calling clear_iscsi_subsystem 00:11:40.129 Calling clear_vhost_blk_subsystem 00:11:40.129 Calling clear_ublk_subsystem 00:11:40.129 Calling clear_nbd_subsystem 00:11:40.129 Calling clear_nvmf_subsystem 00:11:40.129 Calling clear_bdev_subsystem 00:11:40.129 07:21:43 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:11:40.129 07:21:43 json_config -- json_config/json_config.sh@350 -- # count=100 00:11:40.129 07:21:43 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:11:40.129 07:21:43 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:11:40.129 07:21:43 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:40.129 07:21:43 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:11:40.694 07:21:44 json_config -- json_config/json_config.sh@352 -- # break 00:11:40.694 07:21:44 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:11:40.694 07:21:44 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:11:40.694 07:21:44 json_config -- json_config/common.sh@31 -- # local app=target 00:11:40.694 07:21:44 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:40.694 07:21:44 json_config -- json_config/common.sh@35 -- # [[ -n 67253 ]] 00:11:40.694 07:21:44 json_config -- json_config/common.sh@38 -- # kill -SIGINT 67253 00:11:40.694 07:21:44 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:40.694 07:21:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:40.694 07:21:44 json_config -- json_config/common.sh@41 -- # kill -0 67253 00:11:40.694 07:21:44 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:11:40.953 07:21:44 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:11:40.953 07:21:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:40.953 07:21:44 json_config -- json_config/common.sh@41 -- # kill -0 67253 00:11:40.953 07:21:44 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:11:41.518 07:21:45 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:11:41.518 07:21:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:41.518 07:21:45 json_config -- json_config/common.sh@41 -- # kill -0 67253 00:11:41.518 07:21:45 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:11:42.084 07:21:45 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:11:42.084 07:21:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:42.084 
07:21:45 json_config -- json_config/common.sh@41 -- # kill -0 67253 00:11:42.084 07:21:45 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:42.084 07:21:45 json_config -- json_config/common.sh@43 -- # break 00:11:42.084 07:21:45 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:42.084 SPDK target shutdown done 00:11:42.084 07:21:45 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:42.084 INFO: relaunching applications... 00:11:42.084 07:21:45 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:11:42.084 07:21:45 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:42.084 07:21:45 json_config -- json_config/common.sh@9 -- # local app=target 00:11:42.084 07:21:45 json_config -- json_config/common.sh@10 -- # shift 00:11:42.084 07:21:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:42.084 07:21:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:42.084 07:21:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:11:42.084 07:21:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:42.084 07:21:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:42.084 07:21:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=67521 00:11:42.084 07:21:45 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:42.084 Waiting for target to run... 00:11:42.084 07:21:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:42.084 07:21:45 json_config -- json_config/common.sh@25 -- # waitforlisten 67521 /var/tmp/spdk_tgt.sock 00:11:42.084 07:21:45 json_config -- common/autotest_common.sh@835 -- # '[' -z 67521 ']' 00:11:42.084 07:21:45 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:42.084 07:21:45 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:42.084 07:21:45 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:42.084 07:21:45 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.084 07:21:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:42.084 [2024-11-20 07:21:45.919044] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
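The shutdown-and-relaunch dance logged here is short enough to sketch in full. The PID and all flags are taken from this run; treat the snippet as an approximation of what json_config_test_shutdown_app and json_config_test_start_app do, not their exact bodies.

    pid=67253                      # app_pid[target] reported above
    kill -SIGINT "$pid"            # ask the target to shut down cleanly
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # gone yet?
        sleep 0.5
    done

    # Relaunch the target from the JSON config captured earlier with save_config
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &

The passthru NOTICE lines that follow ("vbdev creation deferred pending base bdev arrival") show why the relaunch still converges even though PTBdevFromMalloc3's JSON entry is parsed before its base bdev Malloc3 exists.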
00:11:42.084 [2024-11-20 07:21:45.919249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67521 ] 00:11:42.649 [2024-11-20 07:21:46.386523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.649 [2024-11-20 07:21:46.540559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.017 [2024-11-20 07:21:47.609662] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:11:44.017 [2024-11-20 07:21:47.609775] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:11:44.017 [2024-11-20 07:21:47.617593] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:11:44.017 [2024-11-20 07:21:47.617648] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:11:44.017 [2024-11-20 07:21:47.625603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:44.017 [2024-11-20 07:21:47.625717] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:44.017 [2024-11-20 07:21:47.625751] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:44.017 [2024-11-20 07:21:47.727782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:44.017 [2024-11-20 07:21:47.727969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.017 [2024-11-20 07:21:47.728018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:11:44.017 [2024-11-20 07:21:47.728062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.017 [2024-11-20 07:21:47.728814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.017 [2024-11-20 07:21:47.728902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:11:44.948 07:21:48 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.948 07:21:48 json_config -- common/autotest_common.sh@868 -- # return 0 00:11:44.948 07:21:48 json_config -- json_config/common.sh@26 -- # echo '' 00:11:44.948 00:11:44.948 07:21:48 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:11:44.948 07:21:48 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:11:44.948 INFO: Checking if target configuration is the same... 00:11:44.948 07:21:48 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:11:44.948 07:21:48 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:44.948 07:21:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:44.948 + '[' 2 -ne 2 ']' 00:11:44.948 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:11:44.948 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:11:44.948 + rootdir=/home/vagrant/spdk_repo/spdk 00:11:44.948 +++ basename /dev/fd/62 00:11:44.948 ++ mktemp /tmp/62.XXX 00:11:44.948 + tmp_file_1=/tmp/62.VHS 00:11:44.948 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:44.948 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:11:44.948 + tmp_file_2=/tmp/spdk_tgt_config.json.nFh 00:11:44.948 + ret=0 00:11:44.948 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:45.205 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:45.205 + diff -u /tmp/62.VHS /tmp/spdk_tgt_config.json.nFh 00:11:45.205 + echo 'INFO: JSON config files are the same' 00:11:45.205 INFO: JSON config files are the same 00:11:45.205 + rm /tmp/62.VHS /tmp/spdk_tgt_config.json.nFh 00:11:45.205 + exit 0 00:11:45.205 07:21:49 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:11:45.205 07:21:49 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:11:45.205 INFO: changing configuration and checking if this can be detected... 00:11:45.205 07:21:49 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:11:45.205 07:21:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:11:45.472 07:21:49 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:45.472 07:21:49 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:11:45.472 07:21:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:45.472 + '[' 2 -ne 2 ']' 00:11:45.472 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:11:45.472 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:11:45.472 + rootdir=/home/vagrant/spdk_repo/spdk 00:11:45.472 +++ basename /dev/fd/62 00:11:45.473 ++ mktemp /tmp/62.XXX 00:11:45.473 + tmp_file_1=/tmp/62.UCR 00:11:45.473 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:45.473 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:11:45.473 + tmp_file_2=/tmp/spdk_tgt_config.json.5rp 00:11:45.473 + ret=0 00:11:45.473 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:45.731 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:45.989 + diff -u /tmp/62.UCR /tmp/spdk_tgt_config.json.5rp 00:11:45.989 + ret=1 00:11:45.989 + echo '=== Start of file: /tmp/62.UCR ===' 00:11:45.989 + cat /tmp/62.UCR 00:11:45.989 + echo '=== End of file: /tmp/62.UCR ===' 00:11:45.989 + echo '' 00:11:45.989 + echo '=== Start of file: /tmp/spdk_tgt_config.json.5rp ===' 00:11:45.989 + cat /tmp/spdk_tgt_config.json.5rp 00:11:45.989 + echo '=== End of file: /tmp/spdk_tgt_config.json.5rp ===' 00:11:45.989 + echo '' 00:11:45.989 + rm /tmp/62.UCR /tmp/spdk_tgt_config.json.5rp 00:11:45.989 + exit 1 00:11:45.989 INFO: configuration change detected. 00:11:45.989 07:21:49 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
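Stripped of its mktemp plumbing, the "is the configuration the same" check that json_diff.sh just performed is roughly the following. The temporary file names are illustrative, and config_filter.py reading from stdin is an assumption about that helper's interface rather than a quote of it.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    CONFIG=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

    # Dump the live config and normalize both sides so key order alone
    # can never produce a spurious diff
    $RPC -s /var/tmp/spdk_tgt.sock save_config | $FILTER -method sort > /tmp/live.json
    $FILTER -method sort < "$CONFIG" > /tmp/saved.json

    if diff -u /tmp/saved.json /tmp/live.json; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi

Deleting MallocBdevForConfigChangeCheck between the two comparisons is what flips the second diff to ret=1 and triggers the "configuration change detected" path above.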
00:11:45.989 07:21:49 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:11:45.989 07:21:49 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:11:45.989 07:21:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:45.989 07:21:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:45.989 07:21:49 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:11:45.989 07:21:49 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:11:45.989 07:21:49 json_config -- json_config/json_config.sh@324 -- # [[ -n 67521 ]] 00:11:45.989 07:21:49 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:11:45.989 07:21:49 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:11:45.990 07:21:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:45.990 07:21:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:45.990 07:21:49 json_config -- json_config/json_config.sh@193 -- # [[ 1 -eq 1 ]] 00:11:45.990 07:21:49 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:11:45.990 07:21:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:11:46.248 07:21:49 json_config -- json_config/json_config.sh@195 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:11:46.248 07:21:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:11:46.583 07:21:50 json_config -- json_config/json_config.sh@196 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:11:46.583 07:21:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:11:46.583 07:21:50 json_config -- json_config/json_config.sh@197 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:11:46.583 07:21:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:11:46.842 07:21:50 json_config -- json_config/json_config.sh@200 -- # uname -s 00:11:46.842 07:21:50 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:11:46.842 07:21:50 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:11:46.842 07:21:50 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:11:46.842 07:21:50 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:11:46.842 07:21:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:46.842 07:21:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:46.842 07:21:50 json_config -- json_config/json_config.sh@330 -- # killprocess 67521 00:11:46.842 07:21:50 json_config -- common/autotest_common.sh@954 -- # '[' -z 67521 ']' 00:11:46.842 07:21:50 json_config -- common/autotest_common.sh@958 -- # kill -0 67521 00:11:46.842 07:21:50 json_config -- common/autotest_common.sh@959 -- # uname 00:11:46.842 07:21:50 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.842 07:21:50 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67521 00:11:47.100 killing process with pid 67521 00:11:47.100 07:21:50 json_config -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:11:47.100 07:21:50 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.100 07:21:50 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67521' 00:11:47.100 07:21:50 json_config -- common/autotest_common.sh@973 -- # kill 67521 00:11:47.100 07:21:50 json_config -- common/autotest_common.sh@978 -- # wait 67521 00:11:48.037 07:21:51 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:48.037 07:21:51 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:11:48.037 07:21:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:48.037 07:21:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:48.037 INFO: Success 00:11:48.037 07:21:51 json_config -- json_config/json_config.sh@335 -- # return 0 00:11:48.037 07:21:51 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:11:48.037 ************************************ 00:11:48.037 END TEST json_config 00:11:48.037 ************************************ 00:11:48.037 00:11:48.037 real 0m16.017s 00:11:48.037 user 0m21.596s 00:11:48.037 sys 0m2.849s 00:11:48.037 07:21:51 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.037 07:21:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:48.296 07:21:51 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:48.297 07:21:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:48.297 07:21:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.297 07:21:51 -- common/autotest_common.sh@10 -- # set +x 00:11:48.297 ************************************ 00:11:48.297 START TEST json_config_extra_key 00:11:48.297 ************************************ 00:11:48.297 07:21:51 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:48.297 07:21:52 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:48.297 07:21:52 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:11:48.297 07:21:52 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:48.297 07:21:52 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@344 -- 
# case "$op" in 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:11:48.297 07:21:52 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.297 07:21:52 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:48.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.297 --rc genhtml_branch_coverage=1 00:11:48.297 --rc genhtml_function_coverage=1 00:11:48.297 --rc genhtml_legend=1 00:11:48.297 --rc geninfo_all_blocks=1 00:11:48.297 --rc geninfo_unexecuted_blocks=1 00:11:48.297 00:11:48.297 ' 00:11:48.297 07:21:52 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:48.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.297 --rc genhtml_branch_coverage=1 00:11:48.297 --rc genhtml_function_coverage=1 00:11:48.297 --rc genhtml_legend=1 00:11:48.297 --rc geninfo_all_blocks=1 00:11:48.297 --rc geninfo_unexecuted_blocks=1 00:11:48.297 00:11:48.297 ' 00:11:48.297 07:21:52 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:48.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.297 --rc genhtml_branch_coverage=1 00:11:48.297 --rc genhtml_function_coverage=1 00:11:48.297 --rc genhtml_legend=1 00:11:48.297 --rc geninfo_all_blocks=1 00:11:48.297 --rc geninfo_unexecuted_blocks=1 00:11:48.297 00:11:48.297 ' 00:11:48.297 07:21:52 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:48.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.297 --rc genhtml_branch_coverage=1 00:11:48.297 --rc genhtml_function_coverage=1 00:11:48.297 --rc genhtml_legend=1 00:11:48.297 --rc geninfo_all_blocks=1 00:11:48.297 --rc geninfo_unexecuted_blocks=1 00:11:48.297 00:11:48.297 ' 00:11:48.297 07:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.297 07:21:52 
json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2cb5538-f466-4c14-8b32-9b15eda1a8a3 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=d2cb5538-f466-4c14-8b32-9b15eda1a8a3 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.297 07:21:52 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.297 07:21:52 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:48.297 07:21:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:48.297 07:21:52 json_config_extra_key -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:48.297 07:21:52 json_config_extra_key 
-- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:48.297 07:21:52 json_config_extra_key -- paths/export.sh@6 -- # export PATH 00:11:48.297 07:21:52 json_config_extra_key -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:48.297 07:21:52 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:48.557 07:21:52 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:11:48.557 07:21:52 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:48.557 07:21:52 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:48.557 07:21:52 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:48.557 07:21:52 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.557 07:21:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.557 07:21:52 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:48.557 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:48.557 07:21:52 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:48.557 07:21:52 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:48.557 07:21:52 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:48.557 07:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:48.557 07:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:11:48.557 07:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:11:48.557 07:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:11:48.557 07:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:11:48.557 07:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:11:48.557 07:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:11:48.557 07:21:52 
json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:11:48.557 07:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:11:48.557 07:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:48.557 07:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:11:48.557 INFO: launching applications... 00:11:48.557 07:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:48.557 07:21:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:11:48.557 07:21:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:11:48.557 07:21:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:48.557 07:21:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:48.557 07:21:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:11:48.557 07:21:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:48.557 07:21:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:48.557 07:21:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=67715 00:11:48.557 07:21:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:48.558 Waiting for target to run... 00:11:48.558 07:21:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 67715 /var/tmp/spdk_tgt.sock 00:11:48.558 07:21:52 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 67715 ']' 00:11:48.558 07:21:52 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:48.558 07:21:52 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:48.558 07:21:52 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.558 07:21:52 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:48.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:48.558 07:21:52 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.558 07:21:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:48.558 [2024-11-20 07:21:52.317779] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
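The extra_key variant starts the target from a static JSON file rather than a saved one. Condensed, the launch plus the wait-for-socket step looks roughly like this; the polling loop is a simplification of the waitforlisten helper, not its actual body.

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!
    echo 'Waiting for target to run...'

    # Poll the RPC socket until the target answers (simplified waitforlisten)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
            rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done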
00:11:48.558 [2024-11-20 07:21:52.318046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67715 ] 00:11:49.124 [2024-11-20 07:21:52.941703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.383 [2024-11-20 07:21:53.067962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.317 07:21:53 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.317 00:11:50.317 INFO: shutting down applications... 00:11:50.317 07:21:53 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:11:50.317 07:21:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:11:50.317 07:21:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:11:50.317 07:21:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:11:50.317 07:21:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:11:50.317 07:21:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:50.317 07:21:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 67715 ]] 00:11:50.317 07:21:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 67715 00:11:50.317 07:21:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:50.317 07:21:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:50.317 07:21:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 67715 00:11:50.317 07:21:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:50.576 07:21:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:50.576 07:21:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:50.576 07:21:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 67715 00:11:50.576 07:21:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:51.143 07:21:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:51.143 07:21:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:51.143 07:21:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 67715 00:11:51.143 07:21:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:51.715 07:21:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:51.715 07:21:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:51.715 07:21:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 67715 00:11:51.715 07:21:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:52.290 07:21:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:52.290 07:21:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:52.290 07:21:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 67715 00:11:52.290 07:21:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:52.857 07:21:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:52.857 07:21:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:52.857 07:21:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 67715 
00:11:52.857 07:21:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:53.116 07:21:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:53.116 07:21:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:53.116 07:21:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 67715 00:11:53.116 07:21:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:53.684 07:21:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:53.684 07:21:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:53.684 07:21:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 67715 00:11:53.684 SPDK target shutdown done 00:11:53.684 Success 00:11:53.684 07:21:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:53.684 07:21:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:11:53.684 07:21:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:53.684 07:21:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:53.684 07:21:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:11:53.684 00:11:53.684 real 0m5.496s 00:11:53.684 user 0m4.577s 00:11:53.684 sys 0m0.919s 00:11:53.684 07:21:57 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.684 07:21:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:53.684 ************************************ 00:11:53.684 END TEST json_config_extra_key 00:11:53.684 ************************************ 00:11:53.684 07:21:57 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:53.684 07:21:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:53.684 07:21:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.684 07:21:57 -- common/autotest_common.sh@10 -- # set +x 00:11:53.684 ************************************ 00:11:53.684 START TEST alias_rpc 00:11:53.684 ************************************ 00:11:53.684 07:21:57 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:53.943 * Looking for test storage... 
00:11:53.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:11:53.943 07:21:57 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:53.943 07:21:57 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:53.943 07:21:57 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:53.943 07:21:57 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@345 -- # : 1 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:11:53.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.943 07:21:57 alias_rpc -- scripts/common.sh@368 -- # return 0 00:11:53.943 07:21:57 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.943 07:21:57 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:53.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.943 --rc genhtml_branch_coverage=1 00:11:53.943 --rc genhtml_function_coverage=1 00:11:53.943 --rc genhtml_legend=1 00:11:53.943 --rc geninfo_all_blocks=1 00:11:53.943 --rc geninfo_unexecuted_blocks=1 00:11:53.943 00:11:53.943 ' 00:11:53.943 07:21:57 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:53.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.943 --rc genhtml_branch_coverage=1 00:11:53.943 --rc genhtml_function_coverage=1 00:11:53.943 --rc genhtml_legend=1 00:11:53.943 --rc geninfo_all_blocks=1 00:11:53.943 --rc geninfo_unexecuted_blocks=1 00:11:53.943 00:11:53.943 ' 00:11:53.943 07:21:57 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:53.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.943 --rc genhtml_branch_coverage=1 00:11:53.943 --rc genhtml_function_coverage=1 00:11:53.943 --rc genhtml_legend=1 00:11:53.943 --rc geninfo_all_blocks=1 00:11:53.943 --rc geninfo_unexecuted_blocks=1 00:11:53.943 00:11:53.943 ' 00:11:53.943 07:21:57 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:53.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.943 --rc genhtml_branch_coverage=1 00:11:53.943 --rc genhtml_function_coverage=1 00:11:53.943 --rc genhtml_legend=1 00:11:53.943 --rc geninfo_all_blocks=1 00:11:53.943 --rc geninfo_unexecuted_blocks=1 00:11:53.943 00:11:53.943 ' 00:11:53.943 07:21:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:53.943 07:21:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=67840 00:11:53.943 07:21:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:53.943 07:21:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 67840 00:11:53.943 07:21:57 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 67840 ']' 00:11:53.943 07:21:57 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.943 07:21:57 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.943 07:21:57 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.943 07:21:57 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.943 07:21:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.943 [2024-11-20 07:21:57.858972] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:11:53.943 [2024-11-20 07:21:57.859117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67840 ] 00:11:54.202 [2024-11-20 07:21:58.029065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.460 [2024-11-20 07:21:58.176556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.398 07:21:59 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.398 07:21:59 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:55.398 07:21:59 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:11:55.659 07:21:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 67840 00:11:55.659 07:21:59 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 67840 ']' 00:11:55.659 07:21:59 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 67840 00:11:55.659 07:21:59 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:11:55.659 07:21:59 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.659 07:21:59 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67840 00:11:55.659 killing process with pid 67840 00:11:55.659 07:21:59 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.659 07:21:59 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.659 07:21:59 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67840' 00:11:55.659 07:21:59 alias_rpc -- common/autotest_common.sh@973 -- # kill 67840 00:11:55.659 07:21:59 alias_rpc -- common/autotest_common.sh@978 -- # wait 67840 00:11:58.946 00:11:58.946 real 0m4.758s 00:11:58.946 user 0m4.778s 00:11:58.946 sys 0m0.660s 00:11:58.946 07:22:02 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.946 07:22:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.946 ************************************ 00:11:58.946 END TEST alias_rpc 00:11:58.946 ************************************ 00:11:58.946 07:22:02 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:11:58.946 07:22:02 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:58.946 07:22:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:58.946 07:22:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.946 07:22:02 -- common/autotest_common.sh@10 -- # set +x 00:11:58.946 ************************************ 00:11:58.946 START TEST spdkcli_tcp 00:11:58.946 ************************************ 00:11:58.946 07:22:02 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:58.946 * Looking for test storage... 
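killprocess, which just reaped the alias_rpc target (pid 67840), follows the same pattern every time it appears in this log. A simplified reading of the trace, assuming the shape below; the real helper in autotest_common.sh handles sudo-wrapped processes and other platforms more carefully.

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                          # still running?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")     # reactor_0 for an SPDK app
        if [ "$process_name" = sudo ]; then
            return 1    # simplified: the real helper resolves the child under sudo
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }

    killprocess 67840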
00:11:58.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:11:58.946 07:22:02 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:58.946 07:22:02 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:11:58.946 07:22:02 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:58.946 07:22:02 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:58.946 07:22:02 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:58.947 07:22:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:11:58.947 07:22:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.947 07:22:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:11:58.947 07:22:02 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:58.947 07:22:02 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:58.947 07:22:02 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:58.947 07:22:02 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:11:58.947 07:22:02 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:58.947 07:22:02 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:58.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.947 --rc genhtml_branch_coverage=1 00:11:58.947 --rc genhtml_function_coverage=1 00:11:58.947 --rc genhtml_legend=1 00:11:58.947 --rc geninfo_all_blocks=1 00:11:58.947 --rc geninfo_unexecuted_blocks=1 00:11:58.947 00:11:58.947 ' 00:11:58.947 07:22:02 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:58.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.947 --rc genhtml_branch_coverage=1 00:11:58.947 --rc genhtml_function_coverage=1 00:11:58.947 --rc genhtml_legend=1 00:11:58.947 --rc geninfo_all_blocks=1 00:11:58.947 --rc geninfo_unexecuted_blocks=1 00:11:58.947 
00:11:58.947 ' 00:11:58.947 07:22:02 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:58.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.947 --rc genhtml_branch_coverage=1 00:11:58.947 --rc genhtml_function_coverage=1 00:11:58.947 --rc genhtml_legend=1 00:11:58.947 --rc geninfo_all_blocks=1 00:11:58.947 --rc geninfo_unexecuted_blocks=1 00:11:58.947 00:11:58.947 ' 00:11:58.947 07:22:02 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:58.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.947 --rc genhtml_branch_coverage=1 00:11:58.947 --rc genhtml_function_coverage=1 00:11:58.947 --rc genhtml_legend=1 00:11:58.947 --rc geninfo_all_blocks=1 00:11:58.947 --rc geninfo_unexecuted_blocks=1 00:11:58.947 00:11:58.947 ' 00:11:58.947 07:22:02 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:11:58.947 07:22:02 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:11:58.947 07:22:02 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:11:58.947 07:22:02 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:11:58.947 07:22:02 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:11:58.947 07:22:02 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:58.947 07:22:02 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:11:58.947 07:22:02 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:58.947 07:22:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:58.947 07:22:02 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=67951 00:11:58.947 07:22:02 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:11:58.947 07:22:02 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 67951 00:11:58.947 07:22:02 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 67951 ']' 00:11:58.947 07:22:02 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.947 07:22:02 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.947 07:22:02 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.947 07:22:02 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.947 07:22:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:58.947 [2024-11-20 07:22:02.689153] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:11:58.947 [2024-11-20 07:22:02.689322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67951 ] 00:11:59.208 [2024-11-20 07:22:02.873909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:59.208 [2024-11-20 07:22:03.026131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.208 [2024-11-20 07:22:03.026166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.587 07:22:04 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.587 07:22:04 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:12:00.587 07:22:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:12:00.587 07:22:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=67974 00:12:00.587 07:22:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:12:00.587 [ 00:12:00.587 "spdk_get_version", 00:12:00.587 "rpc_get_methods", 00:12:00.587 "notify_get_notifications", 00:12:00.587 "notify_get_types", 00:12:00.587 "trace_get_info", 00:12:00.587 "trace_get_tpoint_group_mask", 00:12:00.587 "trace_disable_tpoint_group", 00:12:00.587 "trace_enable_tpoint_group", 00:12:00.587 "trace_clear_tpoint_mask", 00:12:00.587 "trace_set_tpoint_mask", 00:12:00.587 "fsdev_set_opts", 00:12:00.587 "fsdev_get_opts", 00:12:00.587 "framework_get_pci_devices", 00:12:00.587 "framework_get_config", 00:12:00.587 "framework_get_subsystems", 00:12:00.587 "keyring_get_keys", 00:12:00.587 "iobuf_get_stats", 00:12:00.587 "iobuf_set_options", 00:12:00.587 "sock_get_default_impl", 00:12:00.587 "sock_set_default_impl", 00:12:00.587 "sock_impl_set_options", 00:12:00.587 "sock_impl_get_options", 00:12:00.587 "vmd_rescan", 00:12:00.587 "vmd_remove_device", 00:12:00.587 "vmd_enable", 00:12:00.587 "accel_get_stats", 00:12:00.587 "accel_set_options", 00:12:00.587 "accel_set_driver", 00:12:00.587 "accel_crypto_key_destroy", 00:12:00.587 "accel_crypto_keys_get", 00:12:00.587 "accel_crypto_key_create", 00:12:00.587 "accel_assign_opc", 00:12:00.587 "accel_get_module_info", 00:12:00.587 "accel_get_opc_assignments", 00:12:00.587 "bdev_get_histogram", 00:12:00.587 "bdev_enable_histogram", 00:12:00.587 "bdev_set_qos_limit", 00:12:00.587 "bdev_set_qd_sampling_period", 00:12:00.587 "bdev_get_bdevs", 00:12:00.587 "bdev_reset_iostat", 00:12:00.587 "bdev_get_iostat", 00:12:00.587 "bdev_examine", 00:12:00.587 "bdev_wait_for_examine", 00:12:00.587 "bdev_set_options", 00:12:00.587 "scsi_get_devices", 00:12:00.587 "thread_set_cpumask", 00:12:00.587 "scheduler_set_options", 00:12:00.587 "framework_get_governor", 00:12:00.587 "framework_get_scheduler", 00:12:00.587 "framework_set_scheduler", 00:12:00.587 "framework_get_reactors", 00:12:00.587 "thread_get_io_channels", 00:12:00.587 "thread_get_pollers", 00:12:00.587 "thread_get_stats", 00:12:00.587 "framework_monitor_context_switch", 00:12:00.587 "spdk_kill_instance", 00:12:00.587 "log_enable_timestamps", 00:12:00.587 "log_get_flags", 00:12:00.587 "log_clear_flag", 00:12:00.587 "log_set_flag", 00:12:00.587 "log_get_level", 00:12:00.587 "log_set_level", 00:12:00.587 "log_get_print_level", 00:12:00.587 "log_set_print_level", 00:12:00.588 "framework_enable_cpumask_locks", 00:12:00.588 "framework_disable_cpumask_locks", 00:12:00.588 
"framework_wait_init", 00:12:00.588 "framework_start_init", 00:12:00.588 "virtio_blk_create_transport", 00:12:00.588 "virtio_blk_get_transports", 00:12:00.588 "vhost_controller_set_coalescing", 00:12:00.588 "vhost_get_controllers", 00:12:00.588 "vhost_delete_controller", 00:12:00.588 "vhost_create_blk_controller", 00:12:00.588 "vhost_scsi_controller_remove_target", 00:12:00.588 "vhost_scsi_controller_add_target", 00:12:00.588 "vhost_start_scsi_controller", 00:12:00.588 "vhost_create_scsi_controller", 00:12:00.588 "ublk_recover_disk", 00:12:00.588 "ublk_get_disks", 00:12:00.588 "ublk_stop_disk", 00:12:00.588 "ublk_start_disk", 00:12:00.588 "ublk_destroy_target", 00:12:00.588 "ublk_create_target", 00:12:00.588 "nbd_get_disks", 00:12:00.588 "nbd_stop_disk", 00:12:00.588 "nbd_start_disk", 00:12:00.588 "env_dpdk_get_mem_stats", 00:12:00.588 "nvmf_stop_mdns_prr", 00:12:00.588 "nvmf_publish_mdns_prr", 00:12:00.588 "nvmf_subsystem_get_listeners", 00:12:00.588 "nvmf_subsystem_get_qpairs", 00:12:00.588 "nvmf_subsystem_get_controllers", 00:12:00.588 "nvmf_get_stats", 00:12:00.588 "nvmf_get_transports", 00:12:00.588 "nvmf_create_transport", 00:12:00.588 "nvmf_get_targets", 00:12:00.588 "nvmf_delete_target", 00:12:00.588 "nvmf_create_target", 00:12:00.588 "nvmf_subsystem_allow_any_host", 00:12:00.588 "nvmf_subsystem_set_keys", 00:12:00.588 "nvmf_subsystem_remove_host", 00:12:00.588 "nvmf_subsystem_add_host", 00:12:00.588 "nvmf_ns_remove_host", 00:12:00.588 "nvmf_ns_add_host", 00:12:00.588 "nvmf_subsystem_remove_ns", 00:12:00.588 "nvmf_subsystem_set_ns_ana_group", 00:12:00.588 "nvmf_subsystem_add_ns", 00:12:00.588 "nvmf_subsystem_listener_set_ana_state", 00:12:00.588 "nvmf_discovery_get_referrals", 00:12:00.588 "nvmf_discovery_remove_referral", 00:12:00.588 "nvmf_discovery_add_referral", 00:12:00.588 "nvmf_subsystem_remove_listener", 00:12:00.588 "nvmf_subsystem_add_listener", 00:12:00.588 "nvmf_delete_subsystem", 00:12:00.588 "nvmf_create_subsystem", 00:12:00.588 "nvmf_get_subsystems", 00:12:00.588 "nvmf_set_crdt", 00:12:00.588 "nvmf_set_config", 00:12:00.588 "nvmf_set_max_subsystems", 00:12:00.588 "iscsi_get_histogram", 00:12:00.588 "iscsi_enable_histogram", 00:12:00.588 "iscsi_set_options", 00:12:00.588 "iscsi_get_auth_groups", 00:12:00.588 "iscsi_auth_group_remove_secret", 00:12:00.588 "iscsi_auth_group_add_secret", 00:12:00.588 "iscsi_delete_auth_group", 00:12:00.588 "iscsi_create_auth_group", 00:12:00.588 "iscsi_set_discovery_auth", 00:12:00.588 "iscsi_get_options", 00:12:00.588 "iscsi_target_node_request_logout", 00:12:00.588 "iscsi_target_node_set_redirect", 00:12:00.588 "iscsi_target_node_set_auth", 00:12:00.588 "iscsi_target_node_add_lun", 00:12:00.588 "iscsi_get_stats", 00:12:00.588 "iscsi_get_connections", 00:12:00.588 "iscsi_portal_group_set_auth", 00:12:00.588 "iscsi_start_portal_group", 00:12:00.588 "iscsi_delete_portal_group", 00:12:00.588 "iscsi_create_portal_group", 00:12:00.588 "iscsi_get_portal_groups", 00:12:00.588 "iscsi_delete_target_node", 00:12:00.588 "iscsi_target_node_remove_pg_ig_maps", 00:12:00.588 "iscsi_target_node_add_pg_ig_maps", 00:12:00.588 "iscsi_create_target_node", 00:12:00.588 "iscsi_get_target_nodes", 00:12:00.588 "iscsi_delete_initiator_group", 00:12:00.588 "iscsi_initiator_group_remove_initiators", 00:12:00.588 "iscsi_initiator_group_add_initiators", 00:12:00.588 "iscsi_create_initiator_group", 00:12:00.588 "iscsi_get_initiator_groups", 00:12:00.588 "fsdev_aio_delete", 00:12:00.588 "fsdev_aio_create", 00:12:00.588 "keyring_linux_set_options", 00:12:00.588 
"keyring_file_remove_key", 00:12:00.588 "keyring_file_add_key", 00:12:00.588 "iaa_scan_accel_module", 00:12:00.588 "dsa_scan_accel_module", 00:12:00.588 "ioat_scan_accel_module", 00:12:00.588 "accel_error_inject_error", 00:12:00.588 "bdev_iscsi_delete", 00:12:00.588 "bdev_iscsi_create", 00:12:00.588 "bdev_iscsi_set_options", 00:12:00.588 "bdev_virtio_attach_controller", 00:12:00.588 "bdev_virtio_scsi_get_devices", 00:12:00.588 "bdev_virtio_detach_controller", 00:12:00.588 "bdev_virtio_blk_set_hotplug", 00:12:00.588 "bdev_ftl_set_property", 00:12:00.588 "bdev_ftl_get_properties", 00:12:00.588 "bdev_ftl_get_stats", 00:12:00.588 "bdev_ftl_unmap", 00:12:00.588 "bdev_ftl_unload", 00:12:00.588 "bdev_ftl_delete", 00:12:00.588 "bdev_ftl_load", 00:12:00.588 "bdev_ftl_create", 00:12:00.588 "bdev_aio_delete", 00:12:00.588 "bdev_aio_rescan", 00:12:00.588 "bdev_aio_create", 00:12:00.588 "blobfs_create", 00:12:00.588 "blobfs_detect", 00:12:00.588 "blobfs_set_cache_size", 00:12:00.588 "bdev_zone_block_delete", 00:12:00.588 "bdev_zone_block_create", 00:12:00.588 "bdev_delay_delete", 00:12:00.588 "bdev_delay_create", 00:12:00.588 "bdev_delay_update_latency", 00:12:00.588 "bdev_split_delete", 00:12:00.588 "bdev_split_create", 00:12:00.588 "bdev_error_inject_error", 00:12:00.588 "bdev_error_delete", 00:12:00.588 "bdev_error_create", 00:12:00.588 "bdev_raid_set_options", 00:12:00.588 "bdev_raid_remove_base_bdev", 00:12:00.588 "bdev_raid_add_base_bdev", 00:12:00.588 "bdev_raid_delete", 00:12:00.588 "bdev_raid_create", 00:12:00.588 "bdev_raid_get_bdevs", 00:12:00.588 "bdev_lvol_set_parent_bdev", 00:12:00.588 "bdev_lvol_set_parent", 00:12:00.588 "bdev_lvol_check_shallow_copy", 00:12:00.588 "bdev_lvol_start_shallow_copy", 00:12:00.588 "bdev_lvol_grow_lvstore", 00:12:00.588 "bdev_lvol_get_lvols", 00:12:00.588 "bdev_lvol_get_lvstores", 00:12:00.588 "bdev_lvol_delete", 00:12:00.588 "bdev_lvol_set_read_only", 00:12:00.588 "bdev_lvol_resize", 00:12:00.588 "bdev_lvol_decouple_parent", 00:12:00.588 "bdev_lvol_inflate", 00:12:00.588 "bdev_lvol_rename", 00:12:00.588 "bdev_lvol_clone_bdev", 00:12:00.588 "bdev_lvol_clone", 00:12:00.588 "bdev_lvol_snapshot", 00:12:00.588 "bdev_lvol_create", 00:12:00.588 "bdev_lvol_delete_lvstore", 00:12:00.588 "bdev_lvol_rename_lvstore", 00:12:00.588 "bdev_lvol_create_lvstore", 00:12:00.588 "bdev_passthru_delete", 00:12:00.588 "bdev_passthru_create", 00:12:00.588 "bdev_nvme_cuse_unregister", 00:12:00.588 "bdev_nvme_cuse_register", 00:12:00.588 "bdev_opal_new_user", 00:12:00.588 "bdev_opal_set_lock_state", 00:12:00.588 "bdev_opal_delete", 00:12:00.588 "bdev_opal_get_info", 00:12:00.588 "bdev_opal_create", 00:12:00.588 "bdev_nvme_opal_revert", 00:12:00.588 "bdev_nvme_opal_init", 00:12:00.588 "bdev_nvme_send_cmd", 00:12:00.588 "bdev_nvme_set_keys", 00:12:00.588 "bdev_nvme_get_path_iostat", 00:12:00.588 "bdev_nvme_get_mdns_discovery_info", 00:12:00.588 "bdev_nvme_stop_mdns_discovery", 00:12:00.588 "bdev_nvme_start_mdns_discovery", 00:12:00.588 "bdev_nvme_set_multipath_policy", 00:12:00.588 "bdev_nvme_set_preferred_path", 00:12:00.588 "bdev_nvme_get_io_paths", 00:12:00.588 "bdev_nvme_remove_error_injection", 00:12:00.588 "bdev_nvme_add_error_injection", 00:12:00.588 "bdev_nvme_get_discovery_info", 00:12:00.588 "bdev_nvme_stop_discovery", 00:12:00.588 "bdev_nvme_start_discovery", 00:12:00.588 "bdev_nvme_get_controller_health_info", 00:12:00.588 "bdev_nvme_disable_controller", 00:12:00.588 "bdev_nvme_enable_controller", 00:12:00.588 "bdev_nvme_reset_controller", 00:12:00.588 
"bdev_nvme_get_transport_statistics", 00:12:00.588 "bdev_nvme_apply_firmware", 00:12:00.588 "bdev_nvme_detach_controller", 00:12:00.588 "bdev_nvme_get_controllers", 00:12:00.588 "bdev_nvme_attach_controller", 00:12:00.588 "bdev_nvme_set_hotplug", 00:12:00.588 "bdev_nvme_set_options", 00:12:00.588 "bdev_null_resize", 00:12:00.588 "bdev_null_delete", 00:12:00.588 "bdev_null_create", 00:12:00.588 "bdev_malloc_delete", 00:12:00.588 "bdev_malloc_create" 00:12:00.588 ] 00:12:00.588 07:22:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:12:00.588 07:22:04 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.588 07:22:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:00.588 07:22:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:00.588 07:22:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 67951 00:12:00.588 07:22:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 67951 ']' 00:12:00.588 07:22:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 67951 00:12:00.588 07:22:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:12:00.588 07:22:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.588 07:22:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67951 00:12:00.588 killing process with pid 67951 00:12:00.588 07:22:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.588 07:22:04 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.588 07:22:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67951' 00:12:00.588 07:22:04 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 67951 00:12:00.588 07:22:04 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 67951 00:12:03.969 00:12:03.969 real 0m4.830s 00:12:03.969 user 0m8.687s 00:12:03.969 sys 0m0.775s 00:12:03.969 07:22:07 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.969 07:22:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:03.969 ************************************ 00:12:03.969 END TEST spdkcli_tcp 00:12:03.970 ************************************ 00:12:03.970 07:22:07 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:03.970 07:22:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:03.970 07:22:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.970 07:22:07 -- common/autotest_common.sh@10 -- # set +x 00:12:03.970 ************************************ 00:12:03.970 START TEST dpdk_mem_utility 00:12:03.970 ************************************ 00:12:03.970 07:22:07 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:03.970 * Looking for test storage... 
00:12:03.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:12:03.970 07:22:07 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:03.970 07:22:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:12:03.970 07:22:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:03.970 07:22:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.970 07:22:07 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:12:03.970 07:22:07 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.970 07:22:07 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:03.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.970 --rc genhtml_branch_coverage=1 00:12:03.970 --rc genhtml_function_coverage=1 00:12:03.970 --rc genhtml_legend=1 00:12:03.970 --rc geninfo_all_blocks=1 00:12:03.970 --rc geninfo_unexecuted_blocks=1 00:12:03.970 00:12:03.970 ' 00:12:03.970 07:22:07 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:03.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.970 --rc 
genhtml_branch_coverage=1 00:12:03.970 --rc genhtml_function_coverage=1 00:12:03.970 --rc genhtml_legend=1 00:12:03.970 --rc geninfo_all_blocks=1 00:12:03.970 --rc geninfo_unexecuted_blocks=1 00:12:03.970 00:12:03.970 ' 00:12:03.970 07:22:07 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:03.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.970 --rc genhtml_branch_coverage=1 00:12:03.970 --rc genhtml_function_coverage=1 00:12:03.970 --rc genhtml_legend=1 00:12:03.970 --rc geninfo_all_blocks=1 00:12:03.970 --rc geninfo_unexecuted_blocks=1 00:12:03.970 00:12:03.970 ' 00:12:03.970 07:22:07 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:03.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.970 --rc genhtml_branch_coverage=1 00:12:03.970 --rc genhtml_function_coverage=1 00:12:03.970 --rc genhtml_legend=1 00:12:03.970 --rc geninfo_all_blocks=1 00:12:03.970 --rc geninfo_unexecuted_blocks=1 00:12:03.970 00:12:03.970 ' 00:12:03.970 07:22:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:12:03.970 07:22:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68079 00:12:03.970 07:22:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:03.970 07:22:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68079 00:12:03.970 07:22:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 68079 ']' 00:12:03.970 07:22:07 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.970 07:22:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.970 07:22:07 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.970 07:22:07 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.970 07:22:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:03.970 [2024-11-20 07:22:07.580857] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:12:03.970 [2024-11-20 07:22:07.581017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68079 ] 00:12:03.970 [2024-11-20 07:22:07.762181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.970 [2024-11-20 07:22:07.891662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.353 07:22:08 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.353 07:22:08 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:12:05.353 07:22:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:12:05.353 07:22:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:12:05.353 07:22:08 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.353 07:22:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:05.353 { 00:12:05.353 "filename": "/tmp/spdk_mem_dump.txt" 00:12:05.353 } 00:12:05.353 07:22:08 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.353 07:22:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:12:05.353 DPDK memory size 816.000000 MiB in 1 heap(s) 00:12:05.353 1 heaps totaling size 816.000000 MiB 00:12:05.353 size: 816.000000 MiB heap id: 0 00:12:05.353 end heaps---------- 00:12:05.353 9 mempools totaling size 595.772034 MiB 00:12:05.353 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:12:05.353 size: 158.602051 MiB name: PDU_data_out_Pool 00:12:05.353 size: 92.545471 MiB name: bdev_io_68079 00:12:05.353 size: 50.003479 MiB name: msgpool_68079 00:12:05.353 size: 36.509338 MiB name: fsdev_io_68079 00:12:05.353 size: 21.763794 MiB name: PDU_Pool 00:12:05.353 size: 19.513306 MiB name: SCSI_TASK_Pool 00:12:05.353 size: 4.133484 MiB name: evtpool_68079 00:12:05.353 size: 0.026123 MiB name: Session_Pool 00:12:05.353 end mempools------- 00:12:05.353 6 memzones totaling size 4.142822 MiB 00:12:05.353 size: 1.000366 MiB name: RG_ring_0_68079 00:12:05.353 size: 1.000366 MiB name: RG_ring_1_68079 00:12:05.353 size: 1.000366 MiB name: RG_ring_4_68079 00:12:05.353 size: 1.000366 MiB name: RG_ring_5_68079 00:12:05.353 size: 0.125366 MiB name: RG_ring_2_68079 00:12:05.353 size: 0.015991 MiB name: RG_ring_3_68079 00:12:05.353 end memzones------- 00:12:05.353 07:22:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:12:05.353 heap id: 0 total size: 816.000000 MiB number of busy elements: 306 number of free elements: 18 00:12:05.353 list of free elements. 
size: 16.793701 MiB 00:12:05.353 element at address: 0x200003e00000 with size: 1.995972 MiB 00:12:05.353 element at address: 0x200008000000 with size: 1.995972 MiB 00:12:05.353 element at address: 0x200010600000 with size: 1.991028 MiB 00:12:05.353 element at address: 0x200018d00040 with size: 0.999939 MiB 00:12:05.353 element at address: 0x200019100040 with size: 0.999939 MiB 00:12:05.353 element at address: 0x200019200000 with size: 0.999084 MiB 00:12:05.353 element at address: 0x200031e00000 with size: 0.994324 MiB 00:12:05.353 element at address: 0x200000400000 with size: 0.992004 MiB 00:12:05.353 element at address: 0x200018a00000 with size: 0.959656 MiB 00:12:05.353 element at address: 0x200019500040 with size: 0.936401 MiB 00:12:05.353 element at address: 0x200000200000 with size: 0.716980 MiB 00:12:05.353 element at address: 0x20001ac00000 with size: 0.563904 MiB 00:12:05.353 element at address: 0x200018e00000 with size: 0.487976 MiB 00:12:05.353 element at address: 0x200019600000 with size: 0.485413 MiB 00:12:05.353 element at address: 0x200000c00000 with size: 0.484802 MiB 00:12:05.353 element at address: 0x200012c00000 with size: 0.443481 MiB 00:12:05.353 element at address: 0x200028000000 with size: 0.390442 MiB 00:12:05.353 element at address: 0x200000800000 with size: 0.356384 MiB 00:12:05.353 list of standard malloc elements. size: 199.285400 MiB 00:12:05.353 element at address: 0x2000081fef80 with size: 132.000183 MiB 00:12:05.353 element at address: 0x200003ffef80 with size: 64.000183 MiB 00:12:05.353 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:12:05.353 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:12:05.353 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:12:05.353 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:12:05.353 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:12:05.353 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:12:05.353 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:12:05.353 element at address: 0x200007fff040 with size: 0.000305 MiB 00:12:05.353 element at address: 0x2000105ff040 with size: 0.000305 MiB 00:12:05.353 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:12:05.353 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:12:05.353 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:12:05.353 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:12:05.353 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:12:05.353 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:12:05.353 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:12:05.353 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:12:05.353 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:12:05.353 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:12:05.353 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:12:05.354 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7c1c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7c2c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7c3c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7c4c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7c5c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7c6c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7c7c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7c8c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7c9c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7cac0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7cbc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7ccc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7cdc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7cec0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7cfc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7d0c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7d1c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7d2c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:12:05.354 element at 
address: 0x200000c7e4c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200000cff000 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200007fff180 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200007fff280 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200007fff380 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200007fff480 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200007fff700 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200007fff800 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200007fff900 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200007fffa00 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200007fffb00 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200007fffc00 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200007fffd00 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200007fffe00 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200007ffff00 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000105ff180 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000105ff280 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000105ff380 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000105ff480 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000105ff580 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000105ff680 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000105ff780 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000105ff880 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000105ff980 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000105ffa80 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000105ffb80 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000105ffc80 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000105fff00 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200012c71880 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200012c71980 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200012c72080 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200012c72180 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200018e7cfc0 
with size: 0.000244 MiB 00:12:05.354 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:12:05.354 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac926c0 with size: 0.000244 MiB 
00:12:05.355 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:12:05.355 element at address: 0x200028063f40 with size: 0.000244 MiB 00:12:05.355 element at address: 0x200028064040 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806af80 with size: 0.000244 MiB 00:12:05.355 element at 
address: 0x20002806b080 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806b180 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806b280 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806b380 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806b480 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806b580 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806b680 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806b780 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806b880 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806b980 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806be80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806c080 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806c180 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806c280 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806c380 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806c480 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806c580 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806c680 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806c780 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806c880 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806c980 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806d080 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806d180 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806d280 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806d380 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806d480 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806d580 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806d680 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806d780 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806d880 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806d980 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806da80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806db80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806de80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806df80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806e080 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806e180 
with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806e280 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806e380 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806e480 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806e580 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806e680 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806e780 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806e880 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806e980 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806f080 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806f180 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806f280 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806f380 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806f480 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806f580 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806f680 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806f780 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806f880 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806f980 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:12:05.355 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:12:05.355 list of memzone associated elements. 
size: 599.920898 MiB 00:12:05.355 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:12:05.355 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:12:05.355 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:12:05.355 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:12:05.355 element at address: 0x200012df4740 with size: 92.045105 MiB 00:12:05.355 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_68079_0 00:12:05.355 element at address: 0x200000dff340 with size: 48.003113 MiB 00:12:05.355 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68079_0 00:12:05.355 element at address: 0x2000107fdb40 with size: 36.008972 MiB 00:12:05.355 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_68079_0 00:12:05.355 element at address: 0x2000197be900 with size: 20.255615 MiB 00:12:05.355 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:12:05.355 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:12:05.355 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:12:05.355 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:12:05.355 associated memzone info: size: 3.000122 MiB name: MP_evtpool_68079_0 00:12:05.355 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:12:05.356 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68079 00:12:05.356 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:12:05.356 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68079 00:12:05.356 element at address: 0x200018efde00 with size: 1.008179 MiB 00:12:05.356 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:12:05.356 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:12:05.356 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:12:05.356 element at address: 0x200018afde00 with size: 1.008179 MiB 00:12:05.356 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:12:05.356 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:12:05.356 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:12:05.356 element at address: 0x200000cff100 with size: 1.000549 MiB 00:12:05.356 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68079 00:12:05.356 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:12:05.356 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68079 00:12:05.356 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:12:05.356 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68079 00:12:05.356 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:12:05.356 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68079 00:12:05.356 element at address: 0x20000085b3c0 with size: 0.500549 MiB 00:12:05.356 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_68079 00:12:05.356 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:12:05.356 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68079 00:12:05.356 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:12:05.356 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:12:05.356 element at address: 0x200012c72280 with size: 0.500549 MiB 00:12:05.356 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:12:05.356 element at address: 0x20001967c440 with size: 0.250549 MiB 00:12:05.356 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:12:05.356 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:12:05.356 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_68079 00:12:05.356 element at address: 0x2000008df840 with size: 0.125549 MiB 00:12:05.356 associated memzone info: size: 0.125366 MiB name: RG_ring_2_68079 00:12:05.356 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:12:05.356 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:12:05.356 element at address: 0x200028064140 with size: 0.023804 MiB 00:12:05.356 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:12:05.356 element at address: 0x2000008db600 with size: 0.016174 MiB 00:12:05.356 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68079 00:12:05.356 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:12:05.356 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:12:05.356 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:12:05.356 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68079 00:12:05.356 element at address: 0x2000105ffd80 with size: 0.000366 MiB 00:12:05.356 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_68079 00:12:05.356 element at address: 0x200007fff580 with size: 0.000366 MiB 00:12:05.356 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68079 00:12:05.356 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:12:05.356 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:12:05.356 07:22:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:12:05.356 07:22:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68079 00:12:05.356 07:22:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 68079 ']' 00:12:05.356 07:22:09 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 68079 00:12:05.356 07:22:09 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:12:05.356 07:22:09 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.356 07:22:09 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68079 00:12:05.356 07:22:09 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.356 07:22:09 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.356 killing process with pid 68079 00:12:05.356 07:22:09 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68079' 00:12:05.356 07:22:09 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 68079 00:12:05.356 07:22:09 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 68079 00:12:07.894 00:12:07.894 real 0m4.427s 00:12:07.894 user 0m4.254s 00:12:07.894 sys 0m0.694s 00:12:07.894 07:22:11 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.894 07:22:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:07.894 ************************************ 00:12:07.894 END TEST dpdk_mem_utility 00:12:07.894 ************************************ 00:12:07.894 07:22:11 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:12:07.894 07:22:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:07.894 07:22:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.894 07:22:11 -- common/autotest_common.sh@10 -- # set +x 
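For readers following the dpdk_mem_utility run above: the heap/mempool/memzone report it prints is produced by asking the SPDK target for a DPDK memory dump over RPC and then post-processing that dump with dpdk_mem_info.py. A minimal sketch of the flow, assuming the same /home/vagrant/spdk_repo layout used in this job (the real test starts the target via its harness and waits with waitforlisten rather than backgrounding it by hand):

  # start the SPDK target; it listens for RPCs on /var/tmp/spdk.sock by default
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  spdkpid=$!

  # request a DPDK memory dump; per the RPC reply above it is written to /tmp/spdk_mem_dump.txt
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats

  # summarize heaps, mempools and memzones from the dump
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

  # per-element breakdown for heap 0 (the long "element at address ..." listing above)
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0

  kill $spdkpid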
00:12:07.894 ************************************ 00:12:07.894 START TEST event 00:12:07.894 ************************************ 00:12:07.894 07:22:11 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:12:08.153 * Looking for test storage... 00:12:08.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:08.153 07:22:11 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:08.153 07:22:11 event -- common/autotest_common.sh@1693 -- # lcov --version 00:12:08.153 07:22:11 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:08.153 07:22:11 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:08.153 07:22:11 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.153 07:22:11 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.153 07:22:11 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.153 07:22:11 event -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.153 07:22:11 event -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.153 07:22:11 event -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.153 07:22:11 event -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.153 07:22:11 event -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.153 07:22:11 event -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.153 07:22:11 event -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.153 07:22:11 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.153 07:22:11 event -- scripts/common.sh@344 -- # case "$op" in 00:12:08.153 07:22:11 event -- scripts/common.sh@345 -- # : 1 00:12:08.153 07:22:11 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.154 07:22:11 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:08.154 07:22:11 event -- scripts/common.sh@365 -- # decimal 1 00:12:08.154 07:22:11 event -- scripts/common.sh@353 -- # local d=1 00:12:08.154 07:22:11 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.154 07:22:11 event -- scripts/common.sh@355 -- # echo 1 00:12:08.154 07:22:11 event -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.154 07:22:11 event -- scripts/common.sh@366 -- # decimal 2 00:12:08.154 07:22:11 event -- scripts/common.sh@353 -- # local d=2 00:12:08.154 07:22:11 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.154 07:22:11 event -- scripts/common.sh@355 -- # echo 2 00:12:08.154 07:22:11 event -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.154 07:22:11 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.154 07:22:11 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.154 07:22:11 event -- scripts/common.sh@368 -- # return 0 00:12:08.154 07:22:11 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.154 07:22:11 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:08.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.154 --rc genhtml_branch_coverage=1 00:12:08.154 --rc genhtml_function_coverage=1 00:12:08.154 --rc genhtml_legend=1 00:12:08.154 --rc geninfo_all_blocks=1 00:12:08.154 --rc geninfo_unexecuted_blocks=1 00:12:08.154 00:12:08.154 ' 00:12:08.154 07:22:11 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:08.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.154 --rc genhtml_branch_coverage=1 00:12:08.154 --rc genhtml_function_coverage=1 00:12:08.154 --rc genhtml_legend=1 00:12:08.154 --rc 
geninfo_all_blocks=1 00:12:08.154 --rc geninfo_unexecuted_blocks=1 00:12:08.154 00:12:08.154 ' 00:12:08.154 07:22:11 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:08.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.154 --rc genhtml_branch_coverage=1 00:12:08.154 --rc genhtml_function_coverage=1 00:12:08.154 --rc genhtml_legend=1 00:12:08.154 --rc geninfo_all_blocks=1 00:12:08.154 --rc geninfo_unexecuted_blocks=1 00:12:08.154 00:12:08.154 ' 00:12:08.154 07:22:11 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:08.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.154 --rc genhtml_branch_coverage=1 00:12:08.154 --rc genhtml_function_coverage=1 00:12:08.154 --rc genhtml_legend=1 00:12:08.154 --rc geninfo_all_blocks=1 00:12:08.154 --rc geninfo_unexecuted_blocks=1 00:12:08.154 00:12:08.154 ' 00:12:08.154 07:22:11 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:08.154 07:22:11 event -- bdev/nbd_common.sh@6 -- # set -e 00:12:08.154 07:22:11 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:12:08.154 07:22:11 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:08.154 07:22:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.154 07:22:11 event -- common/autotest_common.sh@10 -- # set +x 00:12:08.154 ************************************ 00:12:08.154 START TEST event_perf 00:12:08.154 ************************************ 00:12:08.154 07:22:12 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:12:08.154 Running I/O for 1 seconds...[2024-11-20 07:22:12.057946] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:12:08.154 [2024-11-20 07:22:12.058589] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68191 ] 00:12:08.412 [2024-11-20 07:22:12.246185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.670 [2024-11-20 07:22:12.394073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.670 [2024-11-20 07:22:12.394281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.670 [2024-11-20 07:22:12.394379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.670 Running I/O for 1 seconds...[2024-11-20 07:22:12.394405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.046 00:12:10.046 lcore 0: 68126 00:12:10.046 lcore 1: 68123 00:12:10.046 lcore 2: 68126 00:12:10.046 lcore 3: 68129 00:12:10.046 done. 
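The version probe near the top of this test (lt 1.15 2 via cmp_versions in scripts/common.sh) decides whether the installed lcov predates 2.x before the LCOV_OPTS/LCOV exports are composed: both version strings are split on '.', '-' and ':' and compared component by component. A minimal sketch of the same idea, assuming purely numeric components (the real cmp_versions also validates each field):

    # succeed if version $1 is strictly lower than version $2
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < len; i++ )); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov is older than 2.x"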
00:12:10.046 00:12:10.046 real 0m1.690s 00:12:10.046 user 0m4.454s 00:12:10.046 sys 0m0.131s 00:12:10.046 07:22:13 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.046 07:22:13 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:12:10.046 ************************************ 00:12:10.046 END TEST event_perf 00:12:10.046 ************************************ 00:12:10.046 07:22:13 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:12:10.046 07:22:13 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:10.046 07:22:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.046 07:22:13 event -- common/autotest_common.sh@10 -- # set +x 00:12:10.046 ************************************ 00:12:10.046 START TEST event_reactor 00:12:10.046 ************************************ 00:12:10.046 07:22:13 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:12:10.046 [2024-11-20 07:22:13.811659] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:12:10.046 [2024-11-20 07:22:13.811811] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68232 ] 00:12:10.305 [2024-11-20 07:22:13.992735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.305 [2024-11-20 07:22:14.145059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.681 test_start 00:12:11.681 oneshot 00:12:11.681 tick 100 00:12:11.681 tick 100 00:12:11.681 tick 250 00:12:11.681 tick 100 00:12:11.681 tick 100 00:12:11.681 tick 100 00:12:11.681 tick 250 00:12:11.681 tick 500 00:12:11.681 tick 100 00:12:11.682 tick 100 00:12:11.682 tick 250 00:12:11.682 tick 100 00:12:11.682 tick 100 00:12:11.682 test_end 00:12:11.682 00:12:11.682 real 0m1.641s 00:12:11.682 user 0m1.429s 00:12:11.682 sys 0m0.111s 00:12:11.682 07:22:15 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.682 07:22:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:12:11.682 ************************************ 00:12:11.682 END TEST event_reactor 00:12:11.682 ************************************ 00:12:11.682 07:22:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:11.682 07:22:15 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:11.682 07:22:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.682 07:22:15 event -- common/autotest_common.sh@10 -- # set +x 00:12:11.682 ************************************ 00:12:11.682 START TEST event_reactor_perf 00:12:11.682 ************************************ 00:12:11.682 07:22:15 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:11.682 [2024-11-20 07:22:15.518329] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
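Each suite above is driven through the same run_test wrapper, which is where the START TEST/END TEST banners and the real/user/sys timing printed here come from. An illustrative reduction of that wrapper (the real run_test in autotest_common.sh also scopes xtrace and per-test naming; this only reproduces the visible banner-and-timing behaviour):

    run_timed_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        local rc=0
        time "$@" || rc=$?                 # run the test command and keep its exit status
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }

    # e.g. run_timed_test event_reactor "$rootdir/test/event/reactor/reactor" -t 1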
00:12:11.682 [2024-11-20 07:22:15.518455] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68274 ] 00:12:11.939 [2024-11-20 07:22:15.700164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.939 [2024-11-20 07:22:15.836901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.316 test_start 00:12:13.316 test_end 00:12:13.316 Performance: 349344 events per second 00:12:13.316 00:12:13.316 real 0m1.640s 00:12:13.316 user 0m1.417s 00:12:13.316 sys 0m0.123s 00:12:13.316 07:22:17 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.316 07:22:17 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:12:13.316 ************************************ 00:12:13.316 END TEST event_reactor_perf 00:12:13.316 ************************************ 00:12:13.316 07:22:17 event -- event/event.sh@49 -- # uname -s 00:12:13.316 07:22:17 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:12:13.316 07:22:17 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:12:13.316 07:22:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:13.316 07:22:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.316 07:22:17 event -- common/autotest_common.sh@10 -- # set +x 00:12:13.316 ************************************ 00:12:13.316 START TEST event_scheduler 00:12:13.316 ************************************ 00:12:13.316 07:22:17 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:12:13.576 * Looking for test storage... 
00:12:13.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:12:13.576 07:22:17 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:13.576 07:22:17 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:12:13.576 07:22:17 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:13.576 07:22:17 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.576 07:22:17 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:12:13.576 07:22:17 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.576 07:22:17 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:13.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.576 --rc genhtml_branch_coverage=1 00:12:13.576 --rc genhtml_function_coverage=1 00:12:13.576 --rc genhtml_legend=1 00:12:13.576 --rc geninfo_all_blocks=1 00:12:13.576 --rc geninfo_unexecuted_blocks=1 00:12:13.576 00:12:13.576 ' 00:12:13.576 07:22:17 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:13.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.576 --rc genhtml_branch_coverage=1 00:12:13.576 --rc genhtml_function_coverage=1 00:12:13.576 --rc genhtml_legend=1 00:12:13.576 --rc geninfo_all_blocks=1 00:12:13.576 --rc geninfo_unexecuted_blocks=1 00:12:13.576 00:12:13.576 ' 00:12:13.576 07:22:17 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:13.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.576 --rc genhtml_branch_coverage=1 00:12:13.576 --rc genhtml_function_coverage=1 00:12:13.576 --rc genhtml_legend=1 00:12:13.576 --rc geninfo_all_blocks=1 00:12:13.576 --rc geninfo_unexecuted_blocks=1 00:12:13.576 00:12:13.576 ' 00:12:13.576 07:22:17 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:13.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.576 --rc genhtml_branch_coverage=1 00:12:13.576 --rc genhtml_function_coverage=1 00:12:13.576 --rc genhtml_legend=1 00:12:13.576 --rc geninfo_all_blocks=1 00:12:13.576 --rc geninfo_unexecuted_blocks=1 00:12:13.576 00:12:13.576 ' 00:12:13.576 07:22:17 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:12:13.576 07:22:17 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=68345 00:12:13.576 07:22:17 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:12:13.576 07:22:17 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:12:13.576 07:22:17 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 68345 00:12:13.576 07:22:17 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 68345 ']' 00:12:13.576 07:22:17 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.576 07:22:17 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.576 07:22:17 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.576 07:22:17 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.576 07:22:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:13.576 [2024-11-20 07:22:17.492641] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:12:13.576 [2024-11-20 07:22:17.492833] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68345 ] 00:12:13.835 [2024-11-20 07:22:17.678763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.094 [2024-11-20 07:22:17.838916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.094 [2024-11-20 07:22:17.839112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.094 [2024-11-20 07:22:17.839215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.094 [2024-11-20 07:22:17.839252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.660 07:22:18 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.660 07:22:18 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:12:14.660 07:22:18 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:12:14.660 07:22:18 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.660 07:22:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:14.660 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:14.660 POWER: Cannot set governor of lcore 0 to userspace 00:12:14.660 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:14.660 POWER: Cannot set governor of lcore 0 to performance 00:12:14.660 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:14.660 POWER: Cannot set governor of lcore 0 to userspace 00:12:14.660 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:14.660 POWER: Cannot set governor of lcore 0 to userspace 00:12:14.660 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:12:14.660 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:12:14.660 POWER: Unable to set Power Management Environment for lcore 0 00:12:14.660 [2024-11-20 07:22:18.351871] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:12:14.660 [2024-11-20 07:22:18.351896] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:12:14.660 [2024-11-20 07:22:18.351919] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:12:14.660 [2024-11-20 07:22:18.351945] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:12:14.660 [2024-11-20 07:22:18.351954] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:12:14.660 [2024-11-20 07:22:18.351965] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:12:14.660 07:22:18 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.660 07:22:18 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:12:14.660 07:22:18 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.660 07:22:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:14.919 [2024-11-20 07:22:18.744352] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:12:14.919 07:22:18 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.919 07:22:18 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:12:14.919 07:22:18 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:14.919 07:22:18 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.919 07:22:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:14.919 ************************************ 00:12:14.919 START TEST scheduler_create_thread 00:12:14.919 ************************************ 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.919 2 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.919 3 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.919 4 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.919 5 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.919 6 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.919 7 00:12:14.919 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.920 07:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:12:14.920 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.920 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.178 8 00:12:15.178 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.178 07:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:12:15.178 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.178 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.178 9 00:12:15.178 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.178 07:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:12:15.178 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.178 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.178 10 00:12:15.178 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.178 07:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:12:15.178 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.178 07:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:16.556 07:22:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.556 07:22:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:12:16.556 07:22:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:12:16.556 07:22:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.556 07:22:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:17.124 07:22:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.124 07:22:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:12:17.124 07:22:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.124 07:22:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:18.059 07:22:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.059 07:22:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:12:18.059 07:22:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:12:18.059 07:22:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.059 07:22:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:19.011 07:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.011 00:12:19.011 real 0m3.883s 00:12:19.011 user 0m0.025s 00:12:19.011 sys 0m0.013s 00:12:19.011 07:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.011 07:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:19.011 ************************************ 00:12:19.011 END TEST scheduler_create_thread 00:12:19.011 ************************************ 00:12:19.011 07:22:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:19.011 07:22:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 68345 00:12:19.011 07:22:22 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 68345 ']' 00:12:19.011 07:22:22 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 68345 00:12:19.011 07:22:22 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:12:19.011 07:22:22 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.011 07:22:22 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68345 00:12:19.011 07:22:22 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:19.011 07:22:22 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:19.011 killing process with pid 68345 00:12:19.011 07:22:22 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68345' 00:12:19.011 07:22:22 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 68345 00:12:19.011 07:22:22 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 68345 00:12:19.289 [2024-11-20 07:22:23.019700] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:12:20.727 00:12:20.727 real 0m7.065s 00:12:20.727 user 0m15.058s 00:12:20.727 sys 0m0.714s 00:12:20.727 07:22:24 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.727 07:22:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:20.727 ************************************ 00:12:20.727 END TEST event_scheduler 00:12:20.727 ************************************ 00:12:20.727 07:22:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:12:20.727 07:22:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:12:20.727 07:22:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:20.727 07:22:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.727 07:22:24 event -- common/autotest_common.sh@10 -- # set +x 00:12:20.727 ************************************ 00:12:20.727 START TEST app_repeat 00:12:20.727 ************************************ 00:12:20.727 07:22:24 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:12:20.727 07:22:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:20.727 07:22:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:20.727 07:22:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:12:20.727 07:22:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:20.727 07:22:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:12:20.727 07:22:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:12:20.727 07:22:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:12:20.727 07:22:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=68467 00:12:20.727 07:22:24 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:12:20.727 07:22:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:12:20.727 Process app_repeat pid: 68467 00:12:20.727 07:22:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68467' 00:12:20.727 07:22:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:20.727 spdk_app_start Round 0 00:12:20.727 07:22:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:12:20.727 07:22:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 68467 /var/tmp/spdk-nbd.sock 00:12:20.727 07:22:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 68467 ']' 00:12:20.728 07:22:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:20.728 07:22:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:20.728 07:22:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:20.728 07:22:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.728 07:22:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:20.728 [2024-11-20 07:22:24.379459] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
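app_repeat is started with -r /var/tmp/spdk-nbd.sock and the harness then blocks in waitforlisten until that UNIX-domain RPC socket answers. A rough sketch of such a wait loop, offered as an illustration rather than the autotest_common.sh code (the rpc.py path is assumed to be relative to the SPDK repo):

    wait_for_rpc_socket() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1              # the app died while we waited
            if [[ -S $rpc_addr ]] &&
               scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                                        # socket is up and answering RPCs
            fi
            sleep 0.5
        done
        return 1
    }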
00:12:20.728 [2024-11-20 07:22:24.379604] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68467 ] 00:12:20.728 [2024-11-20 07:22:24.568120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:20.987 [2024-11-20 07:22:24.710367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.987 [2024-11-20 07:22:24.710429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.555 07:22:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.555 07:22:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:12:21.555 07:22:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:21.815 Malloc0 00:12:21.815 07:22:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:22.075 Malloc1 00:12:22.075 07:22:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:22.075 07:22:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:22.075 07:22:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:22.075 07:22:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:22.075 07:22:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:22.075 07:22:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:22.075 07:22:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:22.075 07:22:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:22.075 07:22:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:22.075 07:22:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:22.075 07:22:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:22.075 07:22:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:22.075 07:22:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:22.075 07:22:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:22.075 07:22:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:22.075 07:22:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:22.335 /dev/nbd0 00:12:22.335 07:22:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:22.335 07:22:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:22.335 07:22:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:22.335 07:22:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:22.335 07:22:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:22.335 07:22:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:22.335 07:22:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:22.335 07:22:26 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:12:22.335 07:22:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:22.335 07:22:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:22.335 07:22:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:22.335 1+0 records in 00:12:22.335 1+0 records out 00:12:22.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359727 s, 11.4 MB/s 00:12:22.596 07:22:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:22.596 07:22:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:22.596 07:22:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:22.596 07:22:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:22.596 07:22:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:22.596 07:22:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.596 07:22:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:22.596 07:22:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:22.596 /dev/nbd1 00:12:22.856 07:22:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:22.856 07:22:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:22.856 07:22:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:22.856 07:22:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:22.856 07:22:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:22.856 07:22:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:22.856 07:22:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:22.856 07:22:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:22.856 07:22:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:22.856 07:22:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:22.856 07:22:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:22.856 1+0 records in 00:12:22.856 1+0 records out 00:12:22.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423257 s, 9.7 MB/s 00:12:22.856 07:22:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:22.856 07:22:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:22.856 07:22:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:22.856 07:22:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:22.856 07:22:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:22.856 07:22:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.856 07:22:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:22.856 07:22:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:22.856 07:22:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:22.856 
07:22:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:23.115 { 00:12:23.115 "nbd_device": "/dev/nbd0", 00:12:23.115 "bdev_name": "Malloc0" 00:12:23.115 }, 00:12:23.115 { 00:12:23.115 "nbd_device": "/dev/nbd1", 00:12:23.115 "bdev_name": "Malloc1" 00:12:23.115 } 00:12:23.115 ]' 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:23.115 { 00:12:23.115 "nbd_device": "/dev/nbd0", 00:12:23.115 "bdev_name": "Malloc0" 00:12:23.115 }, 00:12:23.115 { 00:12:23.115 "nbd_device": "/dev/nbd1", 00:12:23.115 "bdev_name": "Malloc1" 00:12:23.115 } 00:12:23.115 ]' 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:23.115 /dev/nbd1' 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:23.115 /dev/nbd1' 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:23.115 256+0 records in 00:12:23.115 256+0 records out 00:12:23.115 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132702 s, 79.0 MB/s 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:23.115 256+0 records in 00:12:23.115 256+0 records out 00:12:23.115 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313732 s, 33.4 MB/s 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:23.115 256+0 records in 00:12:23.115 256+0 records out 00:12:23.115 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0336516 s, 31.2 MB/s 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:23.115 07:22:26 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:23.115 07:22:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:23.116 07:22:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:23.116 07:22:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:23.116 07:22:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:23.116 07:22:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:23.116 07:22:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:23.116 07:22:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:23.116 07:22:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:23.116 07:22:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.116 07:22:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:23.376 07:22:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:23.376 07:22:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:23.376 07:22:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:23.376 07:22:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.376 07:22:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.376 07:22:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:23.376 07:22:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:23.376 07:22:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.376 07:22:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.376 07:22:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:23.634 07:22:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:23.634 07:22:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:23.634 07:22:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:23.634 07:22:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.634 07:22:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.634 07:22:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:23.634 07:22:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:23.634 07:22:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.634 07:22:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:23.634 07:22:27 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:23.634 07:22:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:23.894 07:22:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:23.894 07:22:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:23.894 07:22:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:23.894 07:22:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:23.894 07:22:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:23.894 07:22:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:23.894 07:22:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:23.894 07:22:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:23.894 07:22:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:23.894 07:22:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:23.894 07:22:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:23.894 07:22:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:23.894 07:22:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:24.465 07:22:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:25.845 [2024-11-20 07:22:29.685209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:26.104 [2024-11-20 07:22:29.857991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.105 [2024-11-20 07:22:29.857996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.364 [2024-11-20 07:22:30.154784] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:26.364 [2024-11-20 07:22:30.154915] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:27.304 spdk_app_start Round 1 00:12:27.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:27.304 07:22:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:27.304 07:22:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:12:27.304 07:22:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 68467 /var/tmp/spdk-nbd.sock 00:12:27.304 07:22:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 68467 ']' 00:12:27.304 07:22:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:27.304 07:22:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.304 07:22:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
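Round 0 above exercised the full NBD data path before the instance was torn down with spdk_kill_instance, and Round 1 now repeats the same flow against a fresh app. Condensed from the trace, each round amounts to roughly the following sketch (the temp-file path stands in for the test's nbdrandtest file, and error handling is dropped):

    rpc='scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $rpc bdev_malloc_create 64 4096               # 64 MiB malloc bdev, 4 KiB blocks -> Malloc0
    $rpc nbd_start_disk Malloc0 /dev/nbd0         # expose the bdev as /dev/nbd0
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0       # read back and verify the 1 MiB that was written
    $rpc nbd_stop_disk /dev/nbd0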
00:12:27.304 07:22:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.304 07:22:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:27.563 07:22:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.563 07:22:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:12:27.563 07:22:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:27.823 Malloc0 00:12:28.083 07:22:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:28.341 Malloc1 00:12:28.341 07:22:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:28.341 07:22:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.341 07:22:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:28.341 07:22:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:28.341 07:22:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:28.341 07:22:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:28.341 07:22:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:28.341 07:22:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.341 07:22:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:28.341 07:22:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:28.341 07:22:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:28.341 07:22:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:28.341 07:22:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:28.341 07:22:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:28.341 07:22:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:28.341 07:22:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:28.600 /dev/nbd0 00:12:28.600 07:22:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:28.600 07:22:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:28.600 07:22:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:28.600 07:22:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:28.600 07:22:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:28.600 07:22:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:28.600 07:22:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:28.600 07:22:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:28.600 07:22:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:28.600 07:22:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:28.600 07:22:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:28.600 1+0 records in 00:12:28.600 1+0 records out 
00:12:28.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302707 s, 13.5 MB/s 00:12:28.600 07:22:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:28.600 07:22:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:28.600 07:22:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:28.600 07:22:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:28.600 07:22:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:28.600 07:22:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.601 07:22:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:28.601 07:22:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:28.860 /dev/nbd1 00:12:28.860 07:22:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:28.860 07:22:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:28.860 07:22:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:28.860 07:22:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:28.860 07:22:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:28.860 07:22:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:28.860 07:22:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:28.860 07:22:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:28.860 07:22:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:28.860 07:22:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:28.860 07:22:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:28.860 1+0 records in 00:12:28.860 1+0 records out 00:12:28.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415618 s, 9.9 MB/s 00:12:28.860 07:22:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:28.860 07:22:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:28.860 07:22:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:28.860 07:22:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:28.860 07:22:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:28.860 07:22:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.860 07:22:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:28.860 07:22:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:28.860 07:22:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.860 07:22:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:29.120 { 00:12:29.120 "nbd_device": "/dev/nbd0", 00:12:29.120 "bdev_name": "Malloc0" 00:12:29.120 }, 00:12:29.120 { 00:12:29.120 "nbd_device": "/dev/nbd1", 00:12:29.120 "bdev_name": "Malloc1" 00:12:29.120 } 
00:12:29.120 ]' 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:29.120 { 00:12:29.120 "nbd_device": "/dev/nbd0", 00:12:29.120 "bdev_name": "Malloc0" 00:12:29.120 }, 00:12:29.120 { 00:12:29.120 "nbd_device": "/dev/nbd1", 00:12:29.120 "bdev_name": "Malloc1" 00:12:29.120 } 00:12:29.120 ]' 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:29.120 /dev/nbd1' 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:29.120 /dev/nbd1' 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:29.120 256+0 records in 00:12:29.120 256+0 records out 00:12:29.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013127 s, 79.9 MB/s 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:29.120 256+0 records in 00:12:29.120 256+0 records out 00:12:29.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299379 s, 35.0 MB/s 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:29.120 256+0 records in 00:12:29.120 256+0 records out 00:12:29.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306596 s, 34.2 MB/s 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:29.120 07:22:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:29.120 07:22:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:29.120 07:22:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:29.120 07:22:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:29.120 07:22:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:29.120 07:22:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:29.120 07:22:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:29.120 07:22:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.120 07:22:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:29.380 07:22:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:29.380 07:22:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:29.380 07:22:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:29.380 07:22:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.380 07:22:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.380 07:22:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:29.380 07:22:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:29.380 07:22:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.380 07:22:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.381 07:22:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:29.639 07:22:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:29.639 07:22:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:29.639 07:22:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:29.639 07:22:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.639 07:22:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.639 07:22:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:29.639 07:22:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:29.639 07:22:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.639 07:22:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:29.639 07:22:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:29.639 07:22:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:29.899 07:22:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:29.899 07:22:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:29.899 07:22:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:12:29.899 07:22:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:29.899 07:22:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:29.899 07:22:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:29.899 07:22:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:29.899 07:22:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:29.899 07:22:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:29.899 07:22:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:29.899 07:22:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:29.899 07:22:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:29.899 07:22:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:30.468 07:22:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:32.377 [2024-11-20 07:22:35.840725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:32.378 [2024-11-20 07:22:35.991082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.378 [2024-11-20 07:22:35.991090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.378 [2024-11-20 07:22:36.254969] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:32.378 [2024-11-20 07:22:36.255064] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:33.759 spdk_app_start Round 2 00:12:33.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:33.759 07:22:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:33.759 07:22:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:12:33.759 07:22:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 68467 /var/tmp/spdk-nbd.sock 00:12:33.759 07:22:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 68467 ']' 00:12:33.759 07:22:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:33.759 07:22:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.759 07:22:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
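Condensed for readability: each app_repeat round above (and the identical Round 2 that follows) performs the same bdev/NBD round-trip. Roughly, and assuming a bdev app is already listening on /var/tmp/spdk-nbd.sock, the traced commands boil down to the sketch below; the RPC socket, bdev size and block size are taken from the log, while the temp-file path is a stand-in.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=/tmp/nbdrandtest            # stand-in for test/event/nbdrandtest in the log

    # two 64 MiB malloc bdevs with 4096-byte blocks, exposed as NBD devices
    $rpc -s "$sock" bdev_malloc_create 64 4096                  # -> Malloc0
    $rpc -s "$sock" bdev_malloc_create 64 4096                  # -> Malloc1
    $rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

    # write the same 1 MiB of random data to each device, then compare it back
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"          # non-zero exit here fails the round
    done

    # detach both devices before the next round
    $rpc -s "$sock" nbd_stop_disk /dev/nbd0
    $rpc -s "$sock" nbd_stop_disk /dev/nbd1
    rm -f "$tmp"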
00:12:33.759 07:22:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.759 07:22:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:33.760 07:22:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.760 07:22:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:12:33.760 07:22:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:34.019 Malloc0 00:12:34.020 07:22:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:34.280 Malloc1 00:12:34.280 07:22:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:34.280 07:22:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.280 07:22:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:34.280 07:22:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:34.280 07:22:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:34.280 07:22:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:34.280 07:22:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:34.280 07:22:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.280 07:22:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:34.280 07:22:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:34.280 07:22:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:34.280 07:22:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:34.280 07:22:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:34.280 07:22:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:34.280 07:22:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:34.280 07:22:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:34.540 /dev/nbd0 00:12:34.540 07:22:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:34.540 07:22:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:34.540 07:22:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:34.540 07:22:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:34.540 07:22:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:34.540 07:22:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:34.540 07:22:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:34.804 07:22:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:34.804 07:22:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:34.804 07:22:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:34.804 07:22:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:34.804 1+0 records in 00:12:34.804 1+0 records out 
00:12:34.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495071 s, 8.3 MB/s 00:12:34.804 07:22:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:34.804 07:22:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:34.804 07:22:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:34.804 07:22:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:34.804 07:22:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:34.804 07:22:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:34.804 07:22:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:34.804 07:22:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:34.804 /dev/nbd1 00:12:34.804 07:22:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:35.082 07:22:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:35.082 07:22:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:35.082 07:22:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:35.082 07:22:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:35.082 07:22:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:35.082 07:22:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:35.082 07:22:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:35.082 07:22:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:35.082 07:22:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:35.082 07:22:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:35.082 1+0 records in 00:12:35.082 1+0 records out 00:12:35.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433267 s, 9.5 MB/s 00:12:35.082 07:22:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:35.082 07:22:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:35.082 07:22:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:35.082 07:22:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:35.082 07:22:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:35.082 07:22:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.082 07:22:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:35.082 07:22:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:35.082 07:22:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.082 07:22:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:35.082 07:22:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:35.082 { 00:12:35.082 "nbd_device": "/dev/nbd0", 00:12:35.082 "bdev_name": "Malloc0" 00:12:35.082 }, 00:12:35.082 { 00:12:35.082 "nbd_device": "/dev/nbd1", 00:12:35.082 "bdev_name": "Malloc1" 00:12:35.082 } 
00:12:35.082 ]' 00:12:35.082 07:22:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:35.082 { 00:12:35.082 "nbd_device": "/dev/nbd0", 00:12:35.082 "bdev_name": "Malloc0" 00:12:35.082 }, 00:12:35.082 { 00:12:35.082 "nbd_device": "/dev/nbd1", 00:12:35.082 "bdev_name": "Malloc1" 00:12:35.082 } 00:12:35.082 ]' 00:12:35.082 07:22:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:35.082 07:22:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:35.082 /dev/nbd1' 00:12:35.082 07:22:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:35.082 /dev/nbd1' 00:12:35.082 07:22:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:35.082 07:22:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:35.082 07:22:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:35.082 07:22:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:35.082 07:22:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:35.082 07:22:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:35.082 07:22:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:35.082 07:22:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:35.082 07:22:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:35.082 07:22:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:35.082 07:22:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:35.082 07:22:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:35.342 256+0 records in 00:12:35.342 256+0 records out 00:12:35.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125327 s, 83.7 MB/s 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:35.342 256+0 records in 00:12:35.342 256+0 records out 00:12:35.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241835 s, 43.4 MB/s 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:35.342 256+0 records in 00:12:35.342 256+0 records out 00:12:35.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266398 s, 39.4 MB/s 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:35.342 07:22:39 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.342 07:22:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:35.601 07:22:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:35.601 07:22:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:35.601 07:22:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:35.601 07:22:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.601 07:22:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.601 07:22:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:35.601 07:22:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:35.601 07:22:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.601 07:22:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.601 07:22:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:35.860 07:22:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:35.860 07:22:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:35.860 07:22:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:35.860 07:22:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.860 07:22:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.860 07:22:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:35.860 07:22:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:35.860 07:22:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.860 07:22:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:35.860 07:22:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.860 07:22:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:36.119 07:22:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:36.119 07:22:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:36.119 07:22:39 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:12:36.119 07:22:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:36.119 07:22:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:36.119 07:22:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:36.119 07:22:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:36.119 07:22:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:36.119 07:22:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:36.119 07:22:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:36.119 07:22:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:36.119 07:22:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:36.119 07:22:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:36.688 07:22:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:38.067 [2024-11-20 07:22:41.777577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:38.067 [2024-11-20 07:22:41.929636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.067 [2024-11-20 07:22:41.929644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.327 [2024-11-20 07:22:42.202467] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:38.327 [2024-11-20 07:22:42.202559] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:39.709 07:22:43 event.app_repeat -- event/event.sh@38 -- # waitforlisten 68467 /var/tmp/spdk-nbd.sock 00:12:39.709 07:22:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 68467 ']' 00:12:39.709 07:22:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:39.709 07:22:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.709 07:22:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:39.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
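The nbd_get_count checks that bracket each round above amount to counting /dev/nbd entries in the nbd_get_disks JSON; a minimal standalone sketch of that check, with the socket path taken from the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    count=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    echo "$count"   # 2 while Malloc0/Malloc1 are attached, 0 after nbd_stop_disk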
00:12:39.709 07:22:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.709 07:22:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:39.709 07:22:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.709 07:22:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:12:39.709 07:22:43 event.app_repeat -- event/event.sh@39 -- # killprocess 68467 00:12:39.709 07:22:43 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 68467 ']' 00:12:39.710 07:22:43 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 68467 00:12:39.710 07:22:43 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:12:39.710 07:22:43 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:39.710 07:22:43 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68467 00:12:39.985 killing process with pid 68467 00:12:39.985 07:22:43 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:39.985 07:22:43 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:39.985 07:22:43 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68467' 00:12:39.985 07:22:43 event.app_repeat -- common/autotest_common.sh@973 -- # kill 68467 00:12:39.985 07:22:43 event.app_repeat -- common/autotest_common.sh@978 -- # wait 68467 00:12:41.365 spdk_app_start is called in Round 0. 00:12:41.365 Shutdown signal received, stop current app iteration 00:12:41.365 Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 reinitialization... 00:12:41.365 spdk_app_start is called in Round 1. 00:12:41.365 Shutdown signal received, stop current app iteration 00:12:41.365 Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 reinitialization... 00:12:41.365 spdk_app_start is called in Round 2. 00:12:41.365 Shutdown signal received, stop current app iteration 00:12:41.365 Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 reinitialization... 00:12:41.365 spdk_app_start is called in Round 3. 00:12:41.365 Shutdown signal received, stop current app iteration 00:12:41.365 ************************************ 00:12:41.365 END TEST app_repeat 00:12:41.365 ************************************ 00:12:41.365 07:22:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:12:41.365 07:22:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:12:41.365 00:12:41.365 real 0m20.635s 00:12:41.365 user 0m44.180s 00:12:41.365 sys 0m3.114s 00:12:41.365 07:22:44 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.365 07:22:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:41.365 07:22:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:12:41.365 07:22:44 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:41.365 07:22:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:41.365 07:22:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.365 07:22:44 event -- common/autotest_common.sh@10 -- # set +x 00:12:41.365 ************************************ 00:12:41.365 START TEST cpu_locks 00:12:41.365 ************************************ 00:12:41.365 07:22:45 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:41.365 * Looking for test storage... 
00:12:41.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:41.365 07:22:45 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:41.365 07:22:45 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:12:41.365 07:22:45 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:41.365 07:22:45 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:41.365 07:22:45 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:12:41.365 07:22:45 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.365 07:22:45 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:41.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.366 --rc genhtml_branch_coverage=1 00:12:41.366 --rc genhtml_function_coverage=1 00:12:41.366 --rc genhtml_legend=1 00:12:41.366 --rc geninfo_all_blocks=1 00:12:41.366 --rc geninfo_unexecuted_blocks=1 00:12:41.366 00:12:41.366 ' 00:12:41.366 07:22:45 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:41.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.366 --rc genhtml_branch_coverage=1 00:12:41.366 --rc genhtml_function_coverage=1 
00:12:41.366 --rc genhtml_legend=1 00:12:41.366 --rc geninfo_all_blocks=1 00:12:41.366 --rc geninfo_unexecuted_blocks=1 00:12:41.366 00:12:41.366 ' 00:12:41.366 07:22:45 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:41.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.366 --rc genhtml_branch_coverage=1 00:12:41.366 --rc genhtml_function_coverage=1 00:12:41.366 --rc genhtml_legend=1 00:12:41.366 --rc geninfo_all_blocks=1 00:12:41.366 --rc geninfo_unexecuted_blocks=1 00:12:41.366 00:12:41.366 ' 00:12:41.366 07:22:45 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:41.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.366 --rc genhtml_branch_coverage=1 00:12:41.366 --rc genhtml_function_coverage=1 00:12:41.366 --rc genhtml_legend=1 00:12:41.366 --rc geninfo_all_blocks=1 00:12:41.366 --rc geninfo_unexecuted_blocks=1 00:12:41.366 00:12:41.366 ' 00:12:41.366 07:22:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:12:41.366 07:22:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:12:41.366 07:22:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:12:41.366 07:22:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:12:41.366 07:22:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:41.366 07:22:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.366 07:22:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:41.366 ************************************ 00:12:41.366 START TEST default_locks 00:12:41.366 ************************************ 00:12:41.366 07:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:12:41.366 07:22:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=68974 00:12:41.366 07:22:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:41.366 07:22:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 68974 00:12:41.366 07:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 68974 ']' 00:12:41.366 07:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.366 07:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.366 07:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.366 07:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.366 07:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:41.625 [2024-11-20 07:22:45.298437] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:12:41.625 [2024-11-20 07:22:45.298578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68974 ] 00:12:41.625 [2024-11-20 07:22:45.479596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.885 [2024-11-20 07:22:45.619215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.826 07:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.826 07:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:12:42.826 07:22:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 68974 00:12:42.826 07:22:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 68974 00:12:42.826 07:22:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:43.086 07:22:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 68974 00:12:43.086 07:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 68974 ']' 00:12:43.086 07:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 68974 00:12:43.086 07:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:12:43.086 07:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.086 07:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68974 00:12:43.086 07:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:43.086 07:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:43.086 killing process with pid 68974 00:12:43.086 07:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68974' 00:12:43.086 07:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 68974 00:12:43.086 07:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 68974 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 68974 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 68974 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 68974 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 68974 ']' 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.627 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:45.627 ERROR: process (pid: 68974) is no longer running 00:12:45.627 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (68974) - No such process 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:45.627 00:12:45.627 real 0m4.139s 00:12:45.627 user 0m4.050s 00:12:45.627 sys 0m0.621s 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.627 ************************************ 00:12:45.627 END TEST default_locks 00:12:45.627 ************************************ 00:12:45.627 07:22:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:45.627 07:22:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:12:45.627 07:22:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:45.627 07:22:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.627 07:22:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:45.627 ************************************ 00:12:45.627 START TEST default_locks_via_rpc 00:12:45.627 ************************************ 00:12:45.627 07:22:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:12:45.627 07:22:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69049 00:12:45.627 07:22:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:45.627 07:22:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 69049 00:12:45.627 07:22:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 69049 ']' 00:12:45.627 07:22:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.627 07:22:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
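Both default_locks variants decide whether the target still holds its core lock the same way, by looking for the lock file in lslocks output. A standalone sketch of that check; the grep pattern is the one used in the log, while the pgrep line is only an illustrative way to pick a pid (the tests use the pid they started):

    pid=$(pgrep -o -f spdk_tgt)      # illustrative only
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds its CPU core lock file(s)"
    else
        echo "no core lock held by pid $pid"
    fi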
00:12:45.627 07:22:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.627 07:22:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.627 07:22:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.627 [2024-11-20 07:22:49.483634] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:12:45.627 [2024-11-20 07:22:49.483805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69049 ] 00:12:45.887 [2024-11-20 07:22:49.656799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.887 [2024-11-20 07:22:49.802185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.270 07:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:47.270 07:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:47.270 07:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:12:47.270 07:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.271 07:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.271 07:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.271 07:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:12:47.271 07:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:47.271 07:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:12:47.271 07:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:47.271 07:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:12:47.271 07:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.271 07:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.271 07:22:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.271 07:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 69049 00:12:47.271 07:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 69049 00:12:47.271 07:22:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:47.271 07:22:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 69049 00:12:47.271 07:22:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 69049 ']' 00:12:47.271 07:22:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 69049 00:12:47.271 07:22:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:12:47.531 07:22:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.531 07:22:51 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69049 00:12:47.531 07:22:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.531 07:22:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.531 killing process with pid 69049 00:12:47.531 07:22:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69049' 00:12:47.531 07:22:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 69049 00:12:47.531 07:22:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 69049 00:12:50.070 00:12:50.070 real 0m4.472s 00:12:50.070 user 0m4.385s 00:12:50.070 sys 0m0.694s 00:12:50.070 07:22:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.070 07:22:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.070 ************************************ 00:12:50.070 END TEST default_locks_via_rpc 00:12:50.070 ************************************ 00:12:50.070 07:22:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:12:50.070 07:22:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:50.070 07:22:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.070 07:22:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:50.070 ************************************ 00:12:50.070 START TEST non_locking_app_on_locked_coremask 00:12:50.070 ************************************ 00:12:50.070 07:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:12:50.070 07:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:50.070 07:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69123 00:12:50.070 07:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 69123 /var/tmp/spdk.sock 00:12:50.070 07:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 69123 ']' 00:12:50.070 07:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.070 07:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.070 07:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.071 07:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.071 07:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:50.331 [2024-11-20 07:22:54.018044] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
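The _via_rpc variant above exercises the same core lock through runtime RPCs instead of a start-up flag; the calls reduce to roughly the following, with the RPC names and socket path as they appear in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # no_locks passes after this
    $rpc -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # locks_exist passes after this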
00:12:50.331 [2024-11-20 07:22:54.018204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69123 ] 00:12:50.331 [2024-11-20 07:22:54.179508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.591 [2024-11-20 07:22:54.312201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.530 07:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.530 07:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:51.530 07:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69145 00:12:51.530 07:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:12:51.530 07:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 69145 /var/tmp/spdk2.sock 00:12:51.530 07:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 69145 ']' 00:12:51.530 07:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:51.530 07:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:51.530 07:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:51.530 07:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.530 07:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:51.530 [2024-11-20 07:22:55.435561] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:12:51.530 [2024-11-20 07:22:55.436150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69145 ] 00:12:51.788 [2024-11-20 07:22:55.622833] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
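In condensed form, the scenario this test sets up is two targets sharing core 0, which only works because the second instance opts out of the core lock; a sketch using the binary path, core mask and socket names from the log (the real test also waits on each RPC socket before continuing and kills both pids at the end):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    $spdk_tgt -m 0x1 &                   # takes the lock for core 0
    first=$!
    # a second instance on the same core must skip the lock and use its own RPC socket
    $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    second=$!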
00:12:51.788 [2024-11-20 07:22:55.622918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.047 [2024-11-20 07:22:55.923841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.604 07:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.604 07:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:54.604 07:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 69123 00:12:54.604 07:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:54.604 07:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 69123 00:12:54.863 07:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 69123 00:12:54.863 07:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 69123 ']' 00:12:54.863 07:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 69123 00:12:54.863 07:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:54.863 07:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.863 07:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69123 00:12:54.864 07:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.864 07:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.864 killing process with pid 69123 00:12:54.864 07:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69123' 00:12:54.864 07:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 69123 00:12:54.864 07:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 69123 00:13:00.141 07:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 69145 00:13:00.141 07:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 69145 ']' 00:13:00.141 07:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 69145 00:13:00.141 07:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:13:00.141 07:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.141 07:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69145 00:13:00.141 07:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:00.141 07:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:00.141 07:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69145' 00:13:00.141 killing process with pid 69145 00:13:00.141 07:23:03 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 69145 00:13:00.141 07:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 69145 00:13:02.682 00:13:02.682 real 0m12.237s 00:13:02.682 user 0m12.499s 00:13:02.682 sys 0m1.366s 00:13:02.682 07:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.682 07:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:02.682 ************************************ 00:13:02.682 END TEST non_locking_app_on_locked_coremask 00:13:02.682 ************************************ 00:13:02.682 07:23:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:13:02.682 07:23:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:02.682 07:23:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.682 07:23:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:02.682 ************************************ 00:13:02.682 START TEST locking_app_on_unlocked_coremask 00:13:02.682 ************************************ 00:13:02.682 07:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:13:02.682 07:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:13:02.682 07:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=69293 00:13:02.682 07:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 69293 /var/tmp/spdk.sock 00:13:02.682 07:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 69293 ']' 00:13:02.682 07:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.682 07:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.682 07:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.682 07:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.682 07:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:02.682 [2024-11-20 07:23:06.324855] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:13:02.682 [2024-11-20 07:23:06.325236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69293 ] 00:13:02.682 [2024-11-20 07:23:06.524015] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:02.682 [2024-11-20 07:23:06.524074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.942 [2024-11-20 07:23:06.661587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.879 07:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.879 07:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:13:03.879 07:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69315 00:13:03.879 07:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 69315 /var/tmp/spdk2.sock 00:13:03.879 07:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 69315 ']' 00:13:03.879 07:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:03.879 07:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:03.879 07:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:03.879 07:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.879 07:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:03.879 07:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:13:03.879 [2024-11-20 07:23:07.742158] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:13:03.879 [2024-11-20 07:23:07.742306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69315 ] 00:13:04.140 [2024-11-20 07:23:07.927554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.400 [2024-11-20 07:23:08.206559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.942 07:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.942 07:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:13:06.942 07:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 69315 00:13:06.942 07:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 69315 00:13:06.942 07:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:07.511 07:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 69293 00:13:07.511 07:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 69293 ']' 00:13:07.511 07:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 69293 00:13:07.511 07:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:13:07.511 07:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.511 07:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69293 00:13:07.511 07:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.511 killing process with pid 69293 00:13:07.511 07:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.511 07:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69293' 00:13:07.511 07:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 69293 00:13:07.511 07:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 69293 00:13:12.782 07:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 69315 00:13:12.782 07:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 69315 ']' 00:13:12.782 07:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 69315 00:13:12.782 07:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:13:12.782 07:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.782 07:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69315 00:13:12.782 07:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.782 07:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.782 killing process with pid 69315 00:13:12.782 07:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69315' 00:13:12.782 07:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 69315 00:13:12.782 07:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 69315 00:13:15.322 00:13:15.322 real 0m12.793s 00:13:15.322 user 0m12.969s 00:13:15.322 sys 0m1.587s 00:13:15.322 07:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.322 07:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:15.322 ************************************ 00:13:15.322 END TEST locking_app_on_unlocked_coremask 00:13:15.322 ************************************ 00:13:15.322 07:23:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:13:15.322 07:23:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:15.322 07:23:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.322 07:23:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:15.322 ************************************ 00:13:15.322 START TEST locking_app_on_locked_coremask 00:13:15.322 ************************************ 00:13:15.322 07:23:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:13:15.322 07:23:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69474 00:13:15.322 07:23:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 69474 /var/tmp/spdk.sock 00:13:15.322 07:23:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:15.322 07:23:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 69474 ']' 00:13:15.322 07:23:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.322 07:23:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.322 07:23:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.322 07:23:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.322 07:23:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:15.322 [2024-11-20 07:23:19.177023] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:13:15.322 [2024-11-20 07:23:19.177229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69474 ] 00:13:15.581 [2024-11-20 07:23:19.355091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.581 [2024-11-20 07:23:19.494873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.961 07:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.961 07:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:13:16.961 07:23:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69490 00:13:16.961 07:23:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69490 /var/tmp/spdk2.sock 00:13:16.961 07:23:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:13:16.961 07:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:13:16.961 07:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 69490 /var/tmp/spdk2.sock 00:13:16.961 07:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:13:16.961 07:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.961 07:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:13:16.961 07:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.961 07:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 69490 /var/tmp/spdk2.sock 00:13:16.961 07:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 69490 ']' 00:13:16.961 07:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:16.961 07:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:16.961 07:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:16.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:16.961 07:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:16.961 07:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:16.961 [2024-11-20 07:23:20.542364] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:13:16.961 [2024-11-20 07:23:20.542976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69490 ] 00:13:16.961 [2024-11-20 07:23:20.726826] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69474 has claimed it. 00:13:16.961 [2024-11-20 07:23:20.726905] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:17.529 ERROR: process (pid: 69490) is no longer running 00:13:17.529 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (69490) - No such process 00:13:17.529 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.529 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:13:17.529 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:13:17.529 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:17.529 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:17.529 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:17.529 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 69474 00:13:17.529 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 69474 00:13:17.529 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:17.788 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 69474 00:13:17.788 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 69474 ']' 00:13:17.788 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 69474 00:13:17.788 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:13:17.788 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.788 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69474 00:13:17.788 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.788 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.788 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69474' 00:13:17.788 killing process with pid 69474 00:13:17.788 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 69474 00:13:17.788 07:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 69474 00:13:20.368 00:13:20.368 real 0m4.942s 00:13:20.368 user 0m5.051s 00:13:20.368 sys 0m0.817s 00:13:20.368 07:23:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.368 07:23:24 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:13:20.368 ************************************ 00:13:20.368 END TEST locking_app_on_locked_coremask 00:13:20.368 ************************************ 00:13:20.368 07:23:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:13:20.368 07:23:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:20.368 07:23:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.368 07:23:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:20.368 ************************************ 00:13:20.368 START TEST locking_overlapped_coremask 00:13:20.368 ************************************ 00:13:20.368 07:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:13:20.368 07:23:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69560 00:13:20.368 07:23:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:13:20.368 07:23:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 69560 /var/tmp/spdk.sock 00:13:20.368 07:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 69560 ']' 00:13:20.368 07:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.368 07:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.368 07:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.368 07:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.368 07:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:20.368 [2024-11-20 07:23:24.185584] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:13:20.368 [2024-11-20 07:23:24.185788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69560 ] 00:13:20.627 [2024-11-20 07:23:24.346076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:20.627 [2024-11-20 07:23:24.504263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.627 [2024-11-20 07:23:24.504352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.627 [2024-11-20 07:23:24.504388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.006 07:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.006 07:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:13:22.006 07:23:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69583 00:13:22.006 07:23:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69583 /var/tmp/spdk2.sock 00:13:22.006 07:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:13:22.006 07:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 69583 /var/tmp/spdk2.sock 00:13:22.006 07:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:13:22.006 07:23:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:13:22.006 07:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:22.006 07:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:13:22.006 07:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:22.006 07:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 69583 /var/tmp/spdk2.sock 00:13:22.006 07:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 69583 ']' 00:13:22.006 07:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:22.006 07:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.007 07:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:22.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:22.007 07:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.007 07:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:22.007 [2024-11-20 07:23:25.577821] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:13:22.007 [2024-11-20 07:23:25.578413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69583 ] 00:13:22.007 [2024-11-20 07:23:25.750813] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69560 has claimed it. 00:13:22.007 [2024-11-20 07:23:25.750900] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:22.576 ERROR: process (pid: 69583) is no longer running 00:13:22.576 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (69583) - No such process 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 69560 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 69560 ']' 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 69560 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69560 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69560' 00:13:22.576 killing process with pid 69560 00:13:22.576 07:23:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 69560 00:13:22.576 07:23:26 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 69560 00:13:25.116 00:13:25.116 real 0m4.569s 00:13:25.116 user 0m12.483s 00:13:25.116 sys 0m0.635s 00:13:25.116 07:23:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.116 07:23:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:25.116 ************************************ 00:13:25.116 END TEST locking_overlapped_coremask 00:13:25.116 ************************************ 00:13:25.116 07:23:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:13:25.116 07:23:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:25.116 07:23:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.116 07:23:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:25.116 ************************************ 00:13:25.116 START TEST locking_overlapped_coremask_via_rpc 00:13:25.116 ************************************ 00:13:25.116 07:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:13:25.116 07:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=69646 00:13:25.116 07:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:13:25.116 07:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 69646 /var/tmp/spdk.sock 00:13:25.116 07:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 69646 ']' 00:13:25.116 07:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.116 07:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.116 07:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.116 07:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.116 07:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.116 [2024-11-20 07:23:28.824129] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:13:25.116 [2024-11-20 07:23:28.824317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69646 ] 00:13:25.116 [2024-11-20 07:23:29.001009] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:25.117 [2024-11-20 07:23:29.001147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:25.376 [2024-11-20 07:23:29.128381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.376 [2024-11-20 07:23:29.128528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.376 [2024-11-20 07:23:29.128565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.312 07:23:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.312 07:23:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:26.312 07:23:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=69665 00:13:26.312 07:23:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 69665 /var/tmp/spdk2.sock 00:13:26.312 07:23:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:13:26.313 07:23:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 69665 ']' 00:13:26.313 07:23:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:26.313 07:23:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.313 07:23:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:26.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:26.313 07:23:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.313 07:23:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.313 [2024-11-20 07:23:30.150937] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:13:26.313 [2024-11-20 07:23:30.151164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69665 ] 00:13:26.572 [2024-11-20 07:23:30.322434] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:26.572 [2024-11-20 07:23:30.322494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:26.832 [2024-11-20 07:23:30.585151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:26.832 [2024-11-20 07:23:30.588878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.832 [2024-11-20 07:23:30.588917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 [2024-11-20 07:23:32.757925] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69646 has claimed it. 
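Both targets in this test were launched with --disable-cpumask-locks on overlapping masks (0x7 and 0x1c), so neither held a core lock at startup. The locks are then switched on at runtime over JSON-RPC: the first framework_enable_cpumask_locks call, against the default socket of pid 69646, succeeds and claims cores 0-2; the second, against /var/tmp/spdk2.sock, is expected to fail because core 2 is already locked, which is what the claim_cpu_cores error above and the JSON-RPC exchange below record. A rough sketch of the two calls (rpc_cmd in the xtrace presumably wraps scripts/rpc.py):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks                         # pid 69646: claims cores 0-2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # pid 69665: fails, -32603 "Failed to claim CPU core: 2"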
00:13:29.394 request: 00:13:29.394 { 00:13:29.394 "method": "framework_enable_cpumask_locks", 00:13:29.394 "req_id": 1 00:13:29.394 } 00:13:29.394 Got JSON-RPC error response 00:13:29.394 response: 00:13:29.394 { 00:13:29.394 "code": -32603, 00:13:29.394 "message": "Failed to claim CPU core: 2" 00:13:29.394 } 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 69646 /var/tmp/spdk.sock 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 69646 ']' 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 69665 /var/tmp/spdk2.sock 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 69665 ']' 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:29.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.394 07:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 07:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.394 07:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:29.394 07:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:13:29.394 07:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:29.394 07:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:29.394 07:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:29.394 00:13:29.394 real 0m4.448s 00:13:29.394 user 0m1.252s 00:13:29.394 sys 0m0.209s 00:13:29.394 07:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.394 07:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 ************************************ 00:13:29.394 END TEST locking_overlapped_coremask_via_rpc 00:13:29.394 ************************************ 00:13:29.394 07:23:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:13:29.394 07:23:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 69646 ]] 00:13:29.394 07:23:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 69646 00:13:29.394 07:23:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 69646 ']' 00:13:29.394 07:23:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 69646 00:13:29.394 07:23:33 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:13:29.394 07:23:33 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.394 07:23:33 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69646 00:13:29.394 07:23:33 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:29.394 07:23:33 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.394 07:23:33 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69646' 00:13:29.394 killing process with pid 69646 00:13:29.394 07:23:33 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 69646 00:13:29.394 07:23:33 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 69646 00:13:31.924 07:23:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 69665 ]] 00:13:31.924 07:23:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 69665 00:13:31.925 07:23:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 69665 ']' 00:13:31.925 07:23:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 69665 00:13:31.925 07:23:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:13:31.925 07:23:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.925 
07:23:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69665 00:13:31.925 07:23:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:31.925 07:23:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:31.925 07:23:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69665' 00:13:31.925 killing process with pid 69665 00:13:31.925 07:23:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 69665 00:13:31.925 07:23:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 69665 00:13:34.461 07:23:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:34.461 Process with pid 69646 is not found 00:13:34.461 07:23:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:13:34.461 07:23:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 69646 ]] 00:13:34.461 07:23:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 69646 00:13:34.461 07:23:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 69646 ']' 00:13:34.461 07:23:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 69646 00:13:34.461 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (69646) - No such process 00:13:34.461 07:23:38 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 69646 is not found' 00:13:34.461 07:23:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 69665 ]] 00:13:34.461 07:23:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 69665 00:13:34.461 07:23:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 69665 ']' 00:13:34.461 Process with pid 69665 is not found 00:13:34.461 07:23:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 69665 00:13:34.461 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (69665) - No such process 00:13:34.461 07:23:38 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 69665 is not found' 00:13:34.461 07:23:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:34.461 ************************************ 00:13:34.461 END TEST cpu_locks 00:13:34.461 ************************************ 00:13:34.461 00:13:34.461 real 0m53.169s 00:13:34.461 user 1m29.204s 00:13:34.461 sys 0m7.241s 00:13:34.461 07:23:38 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.461 07:23:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:34.461 ************************************ 00:13:34.461 END TEST event 00:13:34.461 ************************************ 00:13:34.461 00:13:34.461 real 1m26.452s 00:13:34.461 user 2m35.963s 00:13:34.461 sys 0m11.860s 00:13:34.461 07:23:38 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.461 07:23:38 event -- common/autotest_common.sh@10 -- # set +x 00:13:34.461 07:23:38 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:34.461 07:23:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:34.461 07:23:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.461 07:23:38 -- common/autotest_common.sh@10 -- # set +x 00:13:34.461 ************************************ 00:13:34.461 START TEST thread 00:13:34.461 ************************************ 00:13:34.461 07:23:38 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:34.719 * Looking for test storage... 
00:13:34.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:13:34.719 07:23:38 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:34.720 07:23:38 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:13:34.720 07:23:38 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:34.720 07:23:38 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:34.720 07:23:38 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:34.720 07:23:38 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:34.720 07:23:38 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:34.720 07:23:38 thread -- scripts/common.sh@336 -- # IFS=.-: 00:13:34.720 07:23:38 thread -- scripts/common.sh@336 -- # read -ra ver1 00:13:34.720 07:23:38 thread -- scripts/common.sh@337 -- # IFS=.-: 00:13:34.720 07:23:38 thread -- scripts/common.sh@337 -- # read -ra ver2 00:13:34.720 07:23:38 thread -- scripts/common.sh@338 -- # local 'op=<' 00:13:34.720 07:23:38 thread -- scripts/common.sh@340 -- # ver1_l=2 00:13:34.720 07:23:38 thread -- scripts/common.sh@341 -- # ver2_l=1 00:13:34.720 07:23:38 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:34.720 07:23:38 thread -- scripts/common.sh@344 -- # case "$op" in 00:13:34.720 07:23:38 thread -- scripts/common.sh@345 -- # : 1 00:13:34.720 07:23:38 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:34.720 07:23:38 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:34.720 07:23:38 thread -- scripts/common.sh@365 -- # decimal 1 00:13:34.720 07:23:38 thread -- scripts/common.sh@353 -- # local d=1 00:13:34.720 07:23:38 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:34.720 07:23:38 thread -- scripts/common.sh@355 -- # echo 1 00:13:34.720 07:23:38 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:13:34.720 07:23:38 thread -- scripts/common.sh@366 -- # decimal 2 00:13:34.720 07:23:38 thread -- scripts/common.sh@353 -- # local d=2 00:13:34.720 07:23:38 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:34.720 07:23:38 thread -- scripts/common.sh@355 -- # echo 2 00:13:34.720 07:23:38 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:13:34.720 07:23:38 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:34.720 07:23:38 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:34.720 07:23:38 thread -- scripts/common.sh@368 -- # return 0 00:13:34.720 07:23:38 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:34.720 07:23:38 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:34.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.720 --rc genhtml_branch_coverage=1 00:13:34.720 --rc genhtml_function_coverage=1 00:13:34.720 --rc genhtml_legend=1 00:13:34.720 --rc geninfo_all_blocks=1 00:13:34.720 --rc geninfo_unexecuted_blocks=1 00:13:34.720 00:13:34.720 ' 00:13:34.720 07:23:38 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:34.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.720 --rc genhtml_branch_coverage=1 00:13:34.720 --rc genhtml_function_coverage=1 00:13:34.720 --rc genhtml_legend=1 00:13:34.720 --rc geninfo_all_blocks=1 00:13:34.720 --rc geninfo_unexecuted_blocks=1 00:13:34.720 00:13:34.720 ' 00:13:34.720 07:23:38 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:34.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:13:34.720 --rc genhtml_branch_coverage=1 00:13:34.720 --rc genhtml_function_coverage=1 00:13:34.720 --rc genhtml_legend=1 00:13:34.720 --rc geninfo_all_blocks=1 00:13:34.720 --rc geninfo_unexecuted_blocks=1 00:13:34.720 00:13:34.720 ' 00:13:34.720 07:23:38 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:34.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.720 --rc genhtml_branch_coverage=1 00:13:34.720 --rc genhtml_function_coverage=1 00:13:34.720 --rc genhtml_legend=1 00:13:34.720 --rc geninfo_all_blocks=1 00:13:34.720 --rc geninfo_unexecuted_blocks=1 00:13:34.720 00:13:34.720 ' 00:13:34.720 07:23:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:34.720 07:23:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:13:34.720 07:23:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.720 07:23:38 thread -- common/autotest_common.sh@10 -- # set +x 00:13:34.720 ************************************ 00:13:34.720 START TEST thread_poller_perf 00:13:34.720 ************************************ 00:13:34.720 07:23:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:34.720 [2024-11-20 07:23:38.567554] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:13:34.720 [2024-11-20 07:23:38.567768] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69861 ] 00:13:34.994 [2024-11-20 07:23:38.748412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.994 [2024-11-20 07:23:38.877534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.994 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:13:36.400 [2024-11-20T07:23:40.333Z] ====================================== 00:13:36.400 [2024-11-20T07:23:40.333Z] busy:2302419262 (cyc) 00:13:36.400 [2024-11-20T07:23:40.333Z] total_run_count: 394000 00:13:36.400 [2024-11-20T07:23:40.333Z] tsc_hz: 2290000000 (cyc) 00:13:36.400 [2024-11-20T07:23:40.333Z] ====================================== 00:13:36.400 [2024-11-20T07:23:40.333Z] poller_cost: 5843 (cyc), 2551 (nsec) 00:13:36.400 00:13:36.400 real 0m1.593s 00:13:36.400 user 0m1.386s 00:13:36.400 sys 0m0.106s 00:13:36.400 07:23:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.400 07:23:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:36.400 ************************************ 00:13:36.400 END TEST thread_poller_perf 00:13:36.400 ************************************ 00:13:36.400 07:23:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:36.400 07:23:40 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:13:36.400 07:23:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.400 07:23:40 thread -- common/autotest_common.sh@10 -- # set +x 00:13:36.400 ************************************ 00:13:36.400 START TEST thread_poller_perf 00:13:36.400 ************************************ 00:13:36.400 07:23:40 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:36.400 [2024-11-20 07:23:40.220351] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:13:36.400 [2024-11-20 07:23:40.220465] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69897 ] 00:13:36.660 [2024-11-20 07:23:40.391719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.660 Running 1000 pollers for 1 seconds with 0 microseconds period. 
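Reading the first run's summary above (1000 pollers, 1 microsecond period, 1 second, per its banner), poller_cost is presumably the busy cycle count divided by total_run_count, converted to nanoseconds with the reported tsc_hz:

    2302419262 cyc / 394000 runs ≈ 5843 cyc per poller run
    5843 cyc / 2.29 cyc per nsec ≈ 2551 nsec    (tsc_hz = 2290000000)

The same arithmetic applies to the zero-period run starting here; its much lower per-run cost (433 cyc, 189 nsec) presumably reflects pollers registered as busy pollers rather than timed pollers.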
00:13:36.660 [2024-11-20 07:23:40.518500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.040 [2024-11-20T07:23:41.973Z] ====================================== 00:13:38.040 [2024-11-20T07:23:41.973Z] busy:2293694476 (cyc) 00:13:38.040 [2024-11-20T07:23:41.973Z] total_run_count: 5294000 00:13:38.040 [2024-11-20T07:23:41.973Z] tsc_hz: 2290000000 (cyc) 00:13:38.040 [2024-11-20T07:23:41.973Z] ====================================== 00:13:38.040 [2024-11-20T07:23:41.973Z] poller_cost: 433 (cyc), 189 (nsec) 00:13:38.040 00:13:38.040 real 0m1.574s 00:13:38.040 user 0m1.365s 00:13:38.040 sys 0m0.107s 00:13:38.040 07:23:41 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:38.040 07:23:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:38.040 ************************************ 00:13:38.040 END TEST thread_poller_perf 00:13:38.040 ************************************ 00:13:38.040 07:23:41 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:13:38.040 07:23:41 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:13:38.040 07:23:41 thread -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:38.040 07:23:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:38.040 07:23:41 thread -- common/autotest_common.sh@10 -- # set +x 00:13:38.040 ************************************ 00:13:38.040 START TEST thread_spdk_lock 00:13:38.040 ************************************ 00:13:38.040 07:23:41 thread.thread_spdk_lock -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:13:38.040 [2024-11-20 07:23:41.861188] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:13:38.040 [2024-11-20 07:23:41.861315] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69935 ] 00:13:38.299 [2024-11-20 07:23:42.034436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:38.299 [2024-11-20 07:23:42.175796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.299 [2024-11-20 07:23:42.175838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.868 [2024-11-20 07:23:42.701828] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:13:38.868 [2024-11-20 07:23:42.701903] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:13:38.868 [2024-11-20 07:23:42.701915] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x55ae23dd29c0 00:13:38.868 [2024-11-20 07:23:42.708596] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:13:38.868 [2024-11-20 07:23:42.708697] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:13:38.868 [2024-11-20 07:23:42.708722] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:13:39.127 Starting test contend 00:13:39.127 Worker Delay Wait us Hold us Total us 00:13:39.127 0 3 141868 191188 333057 00:13:39.127 1 5 72331 293473 365804 00:13:39.127 PASS test contend 00:13:39.127 Starting test hold_by_poller 00:13:39.127 PASS test hold_by_poller 00:13:39.127 Starting test hold_by_message 00:13:39.127 PASS test hold_by_message 00:13:39.127 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:13:39.127 100014 assertions passed 00:13:39.127 0 assertions failed 00:13:39.127 00:13:39.127 real 0m1.140s 00:13:39.127 user 0m1.471s 00:13:39.127 sys 0m0.106s 00:13:39.127 ************************************ 00:13:39.127 END TEST thread_spdk_lock 00:13:39.127 ************************************ 00:13:39.127 07:23:42 thread.thread_spdk_lock -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.127 07:23:42 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:13:39.127 ************************************ 00:13:39.127 END TEST thread 00:13:39.127 ************************************ 00:13:39.127 00:13:39.127 real 0m4.726s 00:13:39.127 user 0m4.388s 00:13:39.127 sys 0m0.594s 00:13:39.127 07:23:43 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.127 07:23:43 thread -- common/autotest_common.sh@10 -- # set +x 00:13:39.386 07:23:43 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:13:39.386 07:23:43 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:39.386 07:23:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:39.386 07:23:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:13:39.386 07:23:43 -- common/autotest_common.sh@10 -- # set +x 00:13:39.386 ************************************ 00:13:39.386 START TEST app_cmdline 00:13:39.386 ************************************ 00:13:39.386 07:23:43 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:39.386 * Looking for test storage... 00:13:39.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:39.386 07:23:43 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:39.386 07:23:43 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:13:39.386 07:23:43 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:39.386 07:23:43 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@345 -- # : 1 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.386 07:23:43 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:13:39.387 07:23:43 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:13:39.387 07:23:43 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:13:39.387 07:23:43 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:13:39.387 07:23:43 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.387 07:23:43 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:13:39.387 07:23:43 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:13:39.387 07:23:43 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:39.387 07:23:43 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:39.387 07:23:43 app_cmdline -- scripts/common.sh@368 -- # return 0 00:13:39.387 07:23:43 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.387 07:23:43 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:39.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.387 --rc genhtml_branch_coverage=1 00:13:39.387 --rc genhtml_function_coverage=1 00:13:39.387 --rc genhtml_legend=1 00:13:39.387 --rc geninfo_all_blocks=1 00:13:39.387 --rc geninfo_unexecuted_blocks=1 00:13:39.387 00:13:39.387 ' 00:13:39.387 07:23:43 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:39.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.387 --rc genhtml_branch_coverage=1 00:13:39.387 --rc genhtml_function_coverage=1 00:13:39.387 --rc genhtml_legend=1 00:13:39.387 --rc geninfo_all_blocks=1 00:13:39.387 --rc geninfo_unexecuted_blocks=1 00:13:39.387 00:13:39.387 ' 00:13:39.387 07:23:43 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:39.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.387 --rc genhtml_branch_coverage=1 00:13:39.387 --rc genhtml_function_coverage=1 00:13:39.387 --rc genhtml_legend=1 00:13:39.387 --rc geninfo_all_blocks=1 00:13:39.387 --rc geninfo_unexecuted_blocks=1 00:13:39.387 00:13:39.387 ' 00:13:39.387 07:23:43 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:39.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.387 --rc genhtml_branch_coverage=1 00:13:39.387 --rc genhtml_function_coverage=1 00:13:39.387 --rc genhtml_legend=1 00:13:39.387 --rc geninfo_all_blocks=1 00:13:39.387 --rc geninfo_unexecuted_blocks=1 00:13:39.387 00:13:39.387 ' 00:13:39.387 07:23:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:39.387 07:23:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=70018 00:13:39.387 07:23:43 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:39.387 07:23:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 70018 00:13:39.387 07:23:43 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 70018 ']' 00:13:39.387 07:23:43 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.387 07:23:43 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.387 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:13:39.387 07:23:43 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.387 07:23:43 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.387 07:23:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:39.647 [2024-11-20 07:23:43.357993] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:13:39.647 [2024-11-20 07:23:43.358157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70018 ] 00:13:39.647 [2024-11-20 07:23:43.532113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.959 [2024-11-20 07:23:43.653891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.897 07:23:44 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.897 07:23:44 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:13:40.897 07:23:44 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:40.897 { 00:13:40.897 "version": "SPDK v25.01-pre git sha1 4c583db59", 00:13:40.897 "fields": { 00:13:40.897 "major": 25, 00:13:40.897 "minor": 1, 00:13:40.897 "patch": 0, 00:13:40.897 "suffix": "-pre", 00:13:40.897 "commit": "4c583db59" 00:13:40.897 } 00:13:40.897 } 00:13:40.897 07:23:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:13:40.897 07:23:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:40.897 07:23:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:40.897 07:23:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:40.897 07:23:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:13:40.897 07:23:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:40.897 07:23:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:40.897 07:23:44 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.897 07:23:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:40.897 07:23:44 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.897 07:23:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:40.897 07:23:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:40.897 07:23:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:40.897 07:23:44 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:13:40.897 07:23:44 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:40.897 07:23:44 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:40.897 07:23:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.897 07:23:44 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:40.897 07:23:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.897 07:23:44 app_cmdline -- common/autotest_common.sh@646 -- # 
type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:40.897 07:23:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.897 07:23:44 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:40.897 07:23:44 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:40.897 07:23:44 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:41.157 request: 00:13:41.157 { 00:13:41.157 "method": "env_dpdk_get_mem_stats", 00:13:41.157 "req_id": 1 00:13:41.157 } 00:13:41.157 Got JSON-RPC error response 00:13:41.157 response: 00:13:41.157 { 00:13:41.157 "code": -32601, 00:13:41.157 "message": "Method not found" 00:13:41.157 } 00:13:41.157 07:23:44 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:13:41.157 07:23:44 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:41.157 07:23:44 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:41.157 07:23:44 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:41.157 07:23:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 70018 00:13:41.157 07:23:44 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 70018 ']' 00:13:41.157 07:23:44 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 70018 00:13:41.157 07:23:44 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:13:41.157 07:23:44 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.157 07:23:44 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70018 00:13:41.157 killing process with pid 70018 00:13:41.157 07:23:45 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:41.157 07:23:45 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:41.157 07:23:45 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70018' 00:13:41.157 07:23:45 app_cmdline -- common/autotest_common.sh@973 -- # kill 70018 00:13:41.157 07:23:45 app_cmdline -- common/autotest_common.sh@978 -- # wait 70018 00:13:43.696 00:13:43.696 real 0m4.244s 00:13:43.696 user 0m4.312s 00:13:43.696 sys 0m0.677s 00:13:43.696 07:23:47 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.696 07:23:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:43.696 ************************************ 00:13:43.696 END TEST app_cmdline 00:13:43.696 ************************************ 00:13:43.696 07:23:47 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:43.696 07:23:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:43.696 07:23:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.696 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:13:43.696 ************************************ 00:13:43.696 START TEST version 00:13:43.696 ************************************ 00:13:43.696 07:23:47 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:43.696 * Looking for test storage... 
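The app_cmdline sequence above is the core of cmdline.sh: spdk_tgt is started with --rpcs-allowed spdk_get_version,rpc_get_methods, the two whitelisted RPCs are exercised (spdk_get_version returns the version object shown, and rpc_get_methods must report exactly those two methods), and any other method, here env_dpdk_get_mem_stats, must be rejected with JSON-RPC error -32601 "Method not found". A condensed, hand-run sketch of the same check from the repository root (arguments taken from the traced commands; the backgrounding and sleep are illustrative, the script itself uses waitforlisten):

  # Illustrative replay of the RPC allow-list check traced above.
  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  sleep 2                                      # crude wait; cmdline.sh waits for the RPC socket
  ./scripts/rpc.py spdk_get_version            # allowed: prints the version JSON
  ./scripts/rpc.py rpc_get_methods             # allowed: lists only the two whitelisted methods
  ./scripts/rpc.py env_dpdk_get_mem_stats      # must fail with -32601 "Method not found"
  kill %1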
00:13:43.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:43.696 07:23:47 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:43.696 07:23:47 version -- common/autotest_common.sh@1693 -- # lcov --version 00:13:43.696 07:23:47 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:43.696 07:23:47 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:43.696 07:23:47 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:43.696 07:23:47 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:43.696 07:23:47 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:43.696 07:23:47 version -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.696 07:23:47 version -- scripts/common.sh@336 -- # read -ra ver1 00:13:43.696 07:23:47 version -- scripts/common.sh@337 -- # IFS=.-: 00:13:43.696 07:23:47 version -- scripts/common.sh@337 -- # read -ra ver2 00:13:43.696 07:23:47 version -- scripts/common.sh@338 -- # local 'op=<' 00:13:43.696 07:23:47 version -- scripts/common.sh@340 -- # ver1_l=2 00:13:43.696 07:23:47 version -- scripts/common.sh@341 -- # ver2_l=1 00:13:43.696 07:23:47 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:43.696 07:23:47 version -- scripts/common.sh@344 -- # case "$op" in 00:13:43.696 07:23:47 version -- scripts/common.sh@345 -- # : 1 00:13:43.696 07:23:47 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:43.696 07:23:47 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:43.696 07:23:47 version -- scripts/common.sh@365 -- # decimal 1 00:13:43.696 07:23:47 version -- scripts/common.sh@353 -- # local d=1 00:13:43.696 07:23:47 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.696 07:23:47 version -- scripts/common.sh@355 -- # echo 1 00:13:43.696 07:23:47 version -- scripts/common.sh@365 -- # ver1[v]=1 00:13:43.696 07:23:47 version -- scripts/common.sh@366 -- # decimal 2 00:13:43.696 07:23:47 version -- scripts/common.sh@353 -- # local d=2 00:13:43.696 07:23:47 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.696 07:23:47 version -- scripts/common.sh@355 -- # echo 2 00:13:43.696 07:23:47 version -- scripts/common.sh@366 -- # ver2[v]=2 00:13:43.696 07:23:47 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:43.696 07:23:47 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:43.696 07:23:47 version -- scripts/common.sh@368 -- # return 0 00:13:43.696 07:23:47 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.696 07:23:47 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:43.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.696 --rc genhtml_branch_coverage=1 00:13:43.696 --rc genhtml_function_coverage=1 00:13:43.696 --rc genhtml_legend=1 00:13:43.696 --rc geninfo_all_blocks=1 00:13:43.696 --rc geninfo_unexecuted_blocks=1 00:13:43.696 00:13:43.696 ' 00:13:43.696 07:23:47 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:43.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.696 --rc genhtml_branch_coverage=1 00:13:43.697 --rc genhtml_function_coverage=1 00:13:43.697 --rc genhtml_legend=1 00:13:43.697 --rc geninfo_all_blocks=1 00:13:43.697 --rc geninfo_unexecuted_blocks=1 00:13:43.697 00:13:43.697 ' 00:13:43.697 07:23:47 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:43.697 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:43.697 --rc genhtml_branch_coverage=1 00:13:43.697 --rc genhtml_function_coverage=1 00:13:43.697 --rc genhtml_legend=1 00:13:43.697 --rc geninfo_all_blocks=1 00:13:43.697 --rc geninfo_unexecuted_blocks=1 00:13:43.697 00:13:43.697 ' 00:13:43.697 07:23:47 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:43.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.697 --rc genhtml_branch_coverage=1 00:13:43.697 --rc genhtml_function_coverage=1 00:13:43.697 --rc genhtml_legend=1 00:13:43.697 --rc geninfo_all_blocks=1 00:13:43.697 --rc geninfo_unexecuted_blocks=1 00:13:43.697 00:13:43.697 ' 00:13:43.697 07:23:47 version -- app/version.sh@17 -- # get_header_version major 00:13:43.697 07:23:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:43.697 07:23:47 version -- app/version.sh@14 -- # tr -d '"' 00:13:43.697 07:23:47 version -- app/version.sh@14 -- # cut -f2 00:13:43.697 07:23:47 version -- app/version.sh@17 -- # major=25 00:13:43.697 07:23:47 version -- app/version.sh@18 -- # get_header_version minor 00:13:43.697 07:23:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:43.697 07:23:47 version -- app/version.sh@14 -- # cut -f2 00:13:43.697 07:23:47 version -- app/version.sh@14 -- # tr -d '"' 00:13:43.697 07:23:47 version -- app/version.sh@18 -- # minor=1 00:13:43.697 07:23:47 version -- app/version.sh@19 -- # get_header_version patch 00:13:43.697 07:23:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:43.697 07:23:47 version -- app/version.sh@14 -- # tr -d '"' 00:13:43.697 07:23:47 version -- app/version.sh@14 -- # cut -f2 00:13:43.957 07:23:47 version -- app/version.sh@19 -- # patch=0 00:13:43.957 07:23:47 version -- app/version.sh@20 -- # get_header_version suffix 00:13:43.957 07:23:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:43.957 07:23:47 version -- app/version.sh@14 -- # cut -f2 00:13:43.957 07:23:47 version -- app/version.sh@14 -- # tr -d '"' 00:13:43.957 07:23:47 version -- app/version.sh@20 -- # suffix=-pre 00:13:43.957 07:23:47 version -- app/version.sh@22 -- # version=25.1 00:13:43.957 07:23:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:13:43.957 07:23:47 version -- app/version.sh@28 -- # version=25.1rc0 00:13:43.957 07:23:47 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:43.957 07:23:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:43.957 07:23:47 version -- app/version.sh@30 -- # py_version=25.1rc0 00:13:43.957 07:23:47 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:13:43.957 00:13:43.957 real 0m0.313s 00:13:43.957 user 0m0.194s 00:13:43.957 sys 0m0.185s 00:13:43.957 07:23:47 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.957 07:23:47 version -- common/autotest_common.sh@10 -- # set +x 00:13:43.957 ************************************ 00:13:43.957 END TEST version 00:13:43.957 ************************************ 00:13:43.957 07:23:47 -- 
spdk/autotest.sh@179 -- # '[' 1 -eq 1 ']' 00:13:43.957 07:23:47 -- spdk/autotest.sh@180 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:13:43.957 07:23:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:43.957 07:23:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.957 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:13:43.957 ************************************ 00:13:43.957 START TEST blockdev_general 00:13:43.957 ************************************ 00:13:43.957 07:23:47 blockdev_general -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:13:43.957 * Looking for test storage... 00:13:43.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:43.957 07:23:47 blockdev_general -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:43.957 07:23:47 blockdev_general -- common/autotest_common.sh@1693 -- # lcov --version 00:13:43.957 07:23:47 blockdev_general -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:44.217 07:23:47 blockdev_general -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@336 -- # IFS=.-: 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@336 -- # read -ra ver1 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@337 -- # IFS=.-: 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@337 -- # read -ra ver2 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@338 -- # local 'op=<' 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@340 -- # ver1_l=2 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@341 -- # ver2_l=1 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@344 -- # case "$op" in 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@345 -- # : 1 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@365 -- # decimal 1 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@353 -- # local d=1 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@355 -- # echo 1 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@365 -- # ver1[v]=1 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@366 -- # decimal 2 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@353 -- # local d=2 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@355 -- # echo 2 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@366 -- # ver2[v]=2 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:44.217 07:23:47 blockdev_general -- scripts/common.sh@368 -- # return 0 00:13:44.217 07:23:47 blockdev_general -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:44.217 07:23:47 blockdev_general -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:44.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.217 --rc genhtml_branch_coverage=1 00:13:44.217 --rc genhtml_function_coverage=1 00:13:44.217 --rc genhtml_legend=1 00:13:44.218 --rc geninfo_all_blocks=1 00:13:44.218 --rc geninfo_unexecuted_blocks=1 00:13:44.218 00:13:44.218 ' 00:13:44.218 07:23:47 blockdev_general -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:44.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.218 --rc genhtml_branch_coverage=1 00:13:44.218 --rc genhtml_function_coverage=1 00:13:44.218 --rc genhtml_legend=1 00:13:44.218 --rc geninfo_all_blocks=1 00:13:44.218 --rc geninfo_unexecuted_blocks=1 00:13:44.218 00:13:44.218 ' 00:13:44.218 07:23:47 blockdev_general -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:44.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.218 --rc genhtml_branch_coverage=1 00:13:44.218 --rc genhtml_function_coverage=1 00:13:44.218 --rc genhtml_legend=1 00:13:44.218 --rc geninfo_all_blocks=1 00:13:44.218 --rc geninfo_unexecuted_blocks=1 00:13:44.218 00:13:44.218 ' 00:13:44.218 07:23:47 blockdev_general -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:44.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.218 --rc genhtml_branch_coverage=1 00:13:44.218 --rc genhtml_function_coverage=1 00:13:44.218 --rc genhtml_legend=1 00:13:44.218 --rc geninfo_all_blocks=1 00:13:44.218 --rc geninfo_unexecuted_blocks=1 00:13:44.218 00:13:44.218 ' 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:44.218 07:23:47 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:44.218 
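The lcov block traced above is the same coverage probe that already ran before app_cmdline and version: autotest_common.sh reads the installed lcov version, compares it against 2 with the lt/cmp_versions helpers from scripts/common.sh, and because this runner has lcov 1.15 it exports the old-style --rc flags in LCOV_OPTS and LCOV. A condensed sketch of that probe, using the helper names as traced (only the branch exercised here is shown):

  # Condensed form of the probe; lt() is the cmp_versions-based helper from scripts/common.sh.
  lcov_ver=$(lcov --version | awk '{print $NF}')    # evaluates to 1.15 on this runner
  if lt "$lcov_ver" 2; then
      export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
        --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
        --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
  fi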
07:23:47 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@683 -- # dek= 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]] 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=70205 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:44.218 07:23:47 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 70205 00:13:44.218 07:23:47 blockdev_general -- common/autotest_common.sh@835 -- # '[' -z 70205 ']' 00:13:44.218 07:23:47 blockdev_general -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.218 07:23:47 blockdev_general -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.218 07:23:47 blockdev_general -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.218 07:23:47 blockdev_general -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.218 07:23:47 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:44.218 [2024-11-20 07:23:48.045913] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:13:44.218 [2024-11-20 07:23:48.046034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70205 ] 00:13:44.478 [2024-11-20 07:23:48.221881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.478 [2024-11-20 07:23:48.347155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.047 07:23:48 blockdev_general -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.047 07:23:48 blockdev_general -- common/autotest_common.sh@868 -- # return 0 00:13:45.047 07:23:48 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:45.047 07:23:48 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf 00:13:45.047 07:23:48 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:13:45.047 07:23:48 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.047 07:23:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:46.034 [2024-11-20 07:23:49.817115] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:46.034 [2024-11-20 07:23:49.817187] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:46.034 00:13:46.034 [2024-11-20 07:23:49.825091] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:46.034 [2024-11-20 07:23:49.825150] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:46.034 00:13:46.034 Malloc0 00:13:46.034 Malloc1 00:13:46.034 Malloc2 00:13:46.293 Malloc3 00:13:46.293 Malloc4 00:13:46.293 Malloc5 00:13:46.293 Malloc6 00:13:46.293 Malloc7 00:13:46.553 Malloc8 00:13:46.553 Malloc9 00:13:46.553 [2024-11-20 07:23:50.273143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:46.553 [2024-11-20 07:23:50.273230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.553 [2024-11-20 07:23:50.273254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:13:46.553 [2024-11-20 07:23:50.273264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.553 [2024-11-20 07:23:50.275427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.553 [2024-11-20 07:23:50.275470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:46.553 TestPT 00:13:46.553 07:23:50 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.553 07:23:50 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:13:46.553 5000+0 records in 00:13:46.553 5000+0 records out 00:13:46.553 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0222559 s, 460 MB/s 00:13:46.553 07:23:50 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:13:46.553 07:23:50 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.553 07:23:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:46.553 AIO0 00:13:46.553 07:23:50 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.553 07:23:50 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:46.553 07:23:50 
blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.553 07:23:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:46.553 07:23:50 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.553 07:23:50 blockdev_general -- bdev/blockdev.sh@739 -- # cat 00:13:46.553 07:23:50 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:46.553 07:23:50 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.553 07:23:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:46.553 07:23:50 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.553 07:23:50 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:46.553 07:23:50 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.553 07:23:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:46.553 07:23:50 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.553 07:23:50 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:46.553 07:23:50 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.553 07:23:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:46.553 07:23:50 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.553 07:23:50 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:46.553 07:23:50 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:46.553 07:23:50 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.553 07:23:50 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:46.553 07:23:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:46.814 07:23:50 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.814 07:23:50 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:46.814 07:23:50 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:46.815 07:23:50 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "3aabe122-b104-4b5b-a388-0f2490b95399"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3aabe122-b104-4b5b-a388-0f2490b95399",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "9ea473f3-895b-5836-9ec9-6d4d60e0ca46"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "9ea473f3-895b-5836-9ec9-6d4d60e0ca46",' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "bd854861-e64c-5c8b-a652-41bed28b9b77"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "bd854861-e64c-5c8b-a652-41bed28b9b77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "52886fca-d8ce-5303-9c2a-1cde970445ec"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "52886fca-d8ce-5303-9c2a-1cde970445ec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "411226be-cc31-5b22-b701-4b296299269f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "411226be-cc31-5b22-b701-4b296299269f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "c55a2a1c-3fb2-58e5-8650-f4eaa2db7c60"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c55a2a1c-3fb2-58e5-8650-f4eaa2db7c60",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "66fefa64-1d62-5483-970c-63a822178b07"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "66fefa64-1d62-5483-970c-63a822178b07",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "8c6d1fa1-1a7e-5339-828b-1363428a081b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8c6d1fa1-1a7e-5339-828b-1363428a081b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "30f2a774-6596-5e45-8a56-201b5f6583af"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "30f2a774-6596-5e45-8a56-201b5f6583af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' 
"nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "1aa5f263-14fd-5940-adc8-9874421e8e4d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1aa5f263-14fd-5940-adc8-9874421e8e4d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "85f1923f-8296-5445-8da5-26477e425ba6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "85f1923f-8296-5445-8da5-26477e425ba6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "3197c940-9229-5fc0-9aae-17e522d4bcdc"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3197c940-9229-5fc0-9aae-17e522d4bcdc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' 
'{' ' "name": "raid0",' ' "aliases": [' ' "2ed94ade-89da-442f-8f12-c5da833b6fa1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2ed94ade-89da-442f-8f12-c5da833b6fa1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2ed94ade-89da-442f-8f12-c5da833b6fa1",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "a252f256-4482-4d83-ab6f-363be57fad05",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "2f812059-8bcb-403a-af6f-9b03c3887a90",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "6935b68c-805b-41e4-967f-05d9466dad63"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6935b68c-805b-41e4-967f-05d9466dad63",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "6935b68c-805b-41e4-967f-05d9466dad63",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "ad1855be-c83d-4373-9ef2-ee763c8ba7d3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "126eede7-79cf-4296-af37-b1d587b3ebb8",' ' 
"is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "4f72518c-b64e-4741-9804-978e28f1fcf5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4f72518c-b64e-4741-9804-978e28f1fcf5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4f72518c-b64e-4741-9804-978e28f1fcf5",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "1ca2ff6c-d4c8-4533-9307-5e6b28c6f293",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "424ff6d6-b34f-4fb0-809f-91b23f480a39",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "947e7821-a017-4762-b3d7-6e9e9d2821cd"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "947e7821-a017-4762-b3d7-6e9e9d2821cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:13:46.815 07:23:50 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:46.815 07:23:50 blockdev_general -- bdev/blockdev.sh@751 -- # hello_world_bdev=Malloc0 00:13:46.815 07:23:50 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:46.815 07:23:50 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 70205 00:13:46.815 07:23:50 blockdev_general -- common/autotest_common.sh@954 -- # '[' -z 70205 ']' 00:13:46.815 07:23:50 blockdev_general -- common/autotest_common.sh@958 -- # kill -0 70205 00:13:46.815 07:23:50 
blockdev_general -- common/autotest_common.sh@959 -- # uname 00:13:46.815 07:23:50 blockdev_general -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:46.815 07:23:50 blockdev_general -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70205 00:13:46.815 07:23:50 blockdev_general -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:46.815 07:23:50 blockdev_general -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:46.815 killing process with pid 70205 00:13:46.815 07:23:50 blockdev_general -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70205' 00:13:46.815 07:23:50 blockdev_general -- common/autotest_common.sh@973 -- # kill 70205 00:13:46.815 07:23:50 blockdev_general -- common/autotest_common.sh@978 -- # wait 70205 00:13:51.117 07:23:54 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:51.117 07:23:54 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:13:51.117 07:23:54 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:51.117 07:23:54 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.117 07:23:54 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:51.117 ************************************ 00:13:51.117 START TEST bdev_hello_world 00:13:51.117 ************************************ 00:13:51.117 07:23:54 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:13:51.117 [2024-11-20 07:23:54.441449] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:13:51.117 [2024-11-20 07:23:54.441692] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70295 ] 00:13:51.117 [2024-11-20 07:23:54.635594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.117 [2024-11-20 07:23:54.773510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.384 [2024-11-20 07:23:55.223129] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:51.384 [2024-11-20 07:23:55.223221] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:51.384 [2024-11-20 07:23:55.231059] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:51.384 [2024-11-20 07:23:55.231128] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:51.384 [2024-11-20 07:23:55.239047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:51.384 [2024-11-20 07:23:55.239126] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:51.384 [2024-11-20 07:23:55.239159] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:51.644 [2024-11-20 07:23:55.448460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:51.644 [2024-11-20 07:23:55.448558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.644 [2024-11-20 07:23:55.448587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:13:51.644 [2024-11-20 07:23:55.448598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.644 [2024-11-20 07:23:55.451105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.644 [2024-11-20 07:23:55.451167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:51.903 [2024-11-20 07:23:55.780489] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:51.903 [2024-11-20 07:23:55.780575] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:13:51.903 [2024-11-20 07:23:55.780634] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:51.903 [2024-11-20 07:23:55.780777] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:51.903 [2024-11-20 07:23:55.780873] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:51.903 [2024-11-20 07:23:55.780895] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:51.903 [2024-11-20 07:23:55.780944] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
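The hello_bdev example above walks the minimal bdev I/O path: it loads the bdev.json configuration, opens Malloc0, acquires an io channel, writes the "Hello World!" string, reads it back, and stops the app once the read completes. A sketch of replaying it outside the harness, from the repository root (arguments taken from the traced command line; the extra empty argument passed by run_test is omitted):

  # Replaying the hello_bdev example against the same JSON config.
  ./build/examples/hello_bdev \
      --json test/bdev/bdev.json \
      -b Malloc0
  # Expect the same NOTICE sequence: open bdev, open io channel, write, read, "Hello World!".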
00:13:51.903 00:13:51.903 [2024-11-20 07:23:55.780984] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:54.440 00:13:54.440 real 0m3.982s 00:13:54.440 user 0m3.439s 00:13:54.440 sys 0m0.416s 00:13:54.440 07:23:58 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.440 07:23:58 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:54.440 ************************************ 00:13:54.440 END TEST bdev_hello_world 00:13:54.440 ************************************ 00:13:54.699 07:23:58 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:54.699 07:23:58 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:54.699 07:23:58 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.699 07:23:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:54.699 ************************************ 00:13:54.699 START TEST bdev_bounds 00:13:54.699 ************************************ 00:13:54.699 07:23:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:13:54.699 07:23:58 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=70358 00:13:54.699 07:23:58 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:54.699 Process bdevio pid: 70358 00:13:54.699 07:23:58 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 70358' 00:13:54.699 07:23:58 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 70358 00:13:54.699 07:23:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 70358 ']' 00:13:54.699 07:23:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.699 07:23:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.699 07:23:58 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:54.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.699 07:23:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.699 07:23:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.699 07:23:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:54.699 [2024-11-20 07:23:58.475267] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:13:54.699 [2024-11-20 07:23:58.475475] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70358 ] 00:13:54.959 [2024-11-20 07:23:58.658357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:54.959 [2024-11-20 07:23:58.808811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.959 [2024-11-20 07:23:58.808916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.959 [2024-11-20 07:23:58.808971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.527 [2024-11-20 07:23:59.299288] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:55.527 [2024-11-20 07:23:59.299374] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:55.527 [2024-11-20 07:23:59.307239] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:55.527 [2024-11-20 07:23:59.307308] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:55.527 [2024-11-20 07:23:59.315189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:55.527 [2024-11-20 07:23:59.315251] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:55.527 [2024-11-20 07:23:59.315267] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:55.787 [2024-11-20 07:23:59.546482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:55.787 [2024-11-20 07:23:59.546571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.787 [2024-11-20 07:23:59.546592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:13:55.787 [2024-11-20 07:23:59.546603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.787 [2024-11-20 07:23:59.549261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.787 [2024-11-20 07:23:59.549313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:56.047 07:23:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:56.047 07:23:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:13:56.047 07:23:59 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:56.306 I/O targets: 00:13:56.306 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:13:56.306 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:13:56.306 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:13:56.306 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:13:56.306 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:13:56.306 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:13:56.306 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:13:56.306 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:13:56.306 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:13:56.306 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:13:56.306 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:13:56.306 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:13:56.306 raid0: 131072 blocks of 512 bytes (64 MiB) 00:13:56.306 concat0: 131072 blocks of 512 bytes (64 MiB) 
00:13:56.306 raid1: 65536 blocks of 512 bytes (32 MiB) 00:13:56.306 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:13:56.306 00:13:56.306 00:13:56.306 CUnit - A unit testing framework for C - Version 2.1-3 00:13:56.306 http://cunit.sourceforge.net/ 00:13:56.306 00:13:56.306 00:13:56.306 Suite: bdevio tests on: AIO0 00:13:56.306 Test: blockdev write read block ...passed 00:13:56.306 Test: blockdev write zeroes read block ...passed 00:13:56.306 Test: blockdev write zeroes read no split ...passed 00:13:56.306 Test: blockdev write zeroes read split ...passed 00:13:56.306 Test: blockdev write zeroes read split partial ...passed 00:13:56.306 Test: blockdev reset ...passed 00:13:56.306 Test: blockdev write read 8 blocks ...passed 00:13:56.306 Test: blockdev write read size > 128k ...passed 00:13:56.306 Test: blockdev write read invalid size ...passed 00:13:56.306 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:56.306 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:56.306 Test: blockdev write read max offset ...passed 00:13:56.306 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:56.306 Test: blockdev writev readv 8 blocks ...passed 00:13:56.306 Test: blockdev writev readv 30 x 1block ...passed 00:13:56.306 Test: blockdev writev readv block ...passed 00:13:56.306 Test: blockdev writev readv size > 128k ...passed 00:13:56.306 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:56.306 Test: blockdev comparev and writev ...passed 00:13:56.306 Test: blockdev nvme passthru rw ...passed 00:13:56.306 Test: blockdev nvme passthru vendor specific ...passed 00:13:56.306 Test: blockdev nvme admin passthru ...passed 00:13:56.306 Test: blockdev copy ...passed 00:13:56.306 Suite: bdevio tests on: raid1 00:13:56.306 Test: blockdev write read block ...passed 00:13:56.306 Test: blockdev write zeroes read block ...passed 00:13:56.306 Test: blockdev write zeroes read no split ...passed 00:13:56.306 Test: blockdev write zeroes read split ...passed 00:13:56.566 Test: blockdev write zeroes read split partial ...passed 00:13:56.566 Test: blockdev reset ...passed 00:13:56.566 Test: blockdev write read 8 blocks ...passed 00:13:56.566 Test: blockdev write read size > 128k ...passed 00:13:56.566 Test: blockdev write read invalid size ...passed 00:13:56.566 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:56.566 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:56.566 Test: blockdev write read max offset ...passed 00:13:56.566 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:56.566 Test: blockdev writev readv 8 blocks ...passed 00:13:56.566 Test: blockdev writev readv 30 x 1block ...passed 00:13:56.566 Test: blockdev writev readv block ...passed 00:13:56.566 Test: blockdev writev readv size > 128k ...passed 00:13:56.567 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:56.567 Test: blockdev comparev and writev ...passed 00:13:56.567 Test: blockdev nvme passthru rw ...passed 00:13:56.567 Test: blockdev nvme passthru vendor specific ...passed 00:13:56.567 Test: blockdev nvme admin passthru ...passed 00:13:56.567 Test: blockdev copy ...passed 00:13:56.567 Suite: bdevio tests on: concat0 00:13:56.567 Test: blockdev write read block ...passed 00:13:56.567 Test: blockdev write zeroes read block ...passed 00:13:56.567 Test: blockdev write zeroes read no split ...passed 00:13:56.567 Test: blockdev write zeroes read split 
...passed 00:13:56.567 Test: blockdev write zeroes read split partial ...passed 00:13:56.567 Test: blockdev reset ...passed 00:13:56.567 Test: blockdev write read 8 blocks ...passed 00:13:56.567 Test: blockdev write read size > 128k ...passed 00:13:56.567 Test: blockdev write read invalid size ...passed 00:13:56.567 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:56.567 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:56.567 Test: blockdev write read max offset ...passed 00:13:56.567 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:56.567 Test: blockdev writev readv 8 blocks ...passed 00:13:56.567 Test: blockdev writev readv 30 x 1block ...passed 00:13:56.567 Test: blockdev writev readv block ...passed 00:13:56.567 Test: blockdev writev readv size > 128k ...passed 00:13:56.567 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:56.567 Test: blockdev comparev and writev ...passed 00:13:56.567 Test: blockdev nvme passthru rw ...passed 00:13:56.567 Test: blockdev nvme passthru vendor specific ...passed 00:13:56.567 Test: blockdev nvme admin passthru ...passed 00:13:56.567 Test: blockdev copy ...passed 00:13:56.567 Suite: bdevio tests on: raid0 00:13:56.567 Test: blockdev write read block ...passed 00:13:56.567 Test: blockdev write zeroes read block ...passed 00:13:56.567 Test: blockdev write zeroes read no split ...passed 00:13:56.567 Test: blockdev write zeroes read split ...passed 00:13:56.567 Test: blockdev write zeroes read split partial ...passed 00:13:56.567 Test: blockdev reset ...passed 00:13:56.567 Test: blockdev write read 8 blocks ...passed 00:13:56.567 Test: blockdev write read size > 128k ...passed 00:13:56.567 Test: blockdev write read invalid size ...passed 00:13:56.567 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:56.567 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:56.567 Test: blockdev write read max offset ...passed 00:13:56.567 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:56.567 Test: blockdev writev readv 8 blocks ...passed 00:13:56.567 Test: blockdev writev readv 30 x 1block ...passed 00:13:56.567 Test: blockdev writev readv block ...passed 00:13:56.567 Test: blockdev writev readv size > 128k ...passed 00:13:56.567 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:56.567 Test: blockdev comparev and writev ...passed 00:13:56.567 Test: blockdev nvme passthru rw ...passed 00:13:56.567 Test: blockdev nvme passthru vendor specific ...passed 00:13:56.567 Test: blockdev nvme admin passthru ...passed 00:13:56.567 Test: blockdev copy ...passed 00:13:56.567 Suite: bdevio tests on: TestPT 00:13:56.567 Test: blockdev write read block ...passed 00:13:56.567 Test: blockdev write zeroes read block ...passed 00:13:56.567 Test: blockdev write zeroes read no split ...passed 00:13:56.567 Test: blockdev write zeroes read split ...passed 00:13:56.827 Test: blockdev write zeroes read split partial ...passed 00:13:56.827 Test: blockdev reset ...passed 00:13:56.827 Test: blockdev write read 8 blocks ...passed 00:13:56.827 Test: blockdev write read size > 128k ...passed 00:13:56.827 Test: blockdev write read invalid size ...passed 00:13:56.827 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:56.827 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:56.827 Test: blockdev write read max offset ...passed 00:13:56.827 Test: 
blockdev write read 2 blocks on overlapped address offset ...passed 00:13:56.827 Test: blockdev writev readv 8 blocks ...passed 00:13:56.827 Test: blockdev writev readv 30 x 1block ...passed 00:13:56.827 Test: blockdev writev readv block ...passed 00:13:56.827 Test: blockdev writev readv size > 128k ...passed 00:13:56.827 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:56.827 Test: blockdev comparev and writev ...passed 00:13:56.827 Test: blockdev nvme passthru rw ...passed 00:13:56.827 Test: blockdev nvme passthru vendor specific ...passed 00:13:56.827 Test: blockdev nvme admin passthru ...passed 00:13:56.827 Test: blockdev copy ...passed 00:13:56.827 Suite: bdevio tests on: Malloc2p7 00:13:56.827 Test: blockdev write read block ...passed 00:13:56.827 Test: blockdev write zeroes read block ...passed 00:13:56.827 Test: blockdev write zeroes read no split ...passed 00:13:56.827 Test: blockdev write zeroes read split ...passed 00:13:56.827 Test: blockdev write zeroes read split partial ...passed 00:13:56.827 Test: blockdev reset ...passed 00:13:56.827 Test: blockdev write read 8 blocks ...passed 00:13:56.827 Test: blockdev write read size > 128k ...passed 00:13:56.827 Test: blockdev write read invalid size ...passed 00:13:56.827 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:56.827 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:56.827 Test: blockdev write read max offset ...passed 00:13:56.827 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:56.827 Test: blockdev writev readv 8 blocks ...passed 00:13:56.827 Test: blockdev writev readv 30 x 1block ...passed 00:13:56.827 Test: blockdev writev readv block ...passed 00:13:56.828 Test: blockdev writev readv size > 128k ...passed 00:13:56.828 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:56.828 Test: blockdev comparev and writev ...passed 00:13:56.828 Test: blockdev nvme passthru rw ...passed 00:13:56.828 Test: blockdev nvme passthru vendor specific ...passed 00:13:56.828 Test: blockdev nvme admin passthru ...passed 00:13:56.828 Test: blockdev copy ...passed 00:13:56.828 Suite: bdevio tests on: Malloc2p6 00:13:56.828 Test: blockdev write read block ...passed 00:13:56.828 Test: blockdev write zeroes read block ...passed 00:13:56.828 Test: blockdev write zeroes read no split ...passed 00:13:56.828 Test: blockdev write zeroes read split ...passed 00:13:56.828 Test: blockdev write zeroes read split partial ...passed 00:13:56.828 Test: blockdev reset ...passed 00:13:56.828 Test: blockdev write read 8 blocks ...passed 00:13:56.828 Test: blockdev write read size > 128k ...passed 00:13:56.828 Test: blockdev write read invalid size ...passed 00:13:56.828 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:56.828 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:56.828 Test: blockdev write read max offset ...passed 00:13:56.828 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:56.828 Test: blockdev writev readv 8 blocks ...passed 00:13:56.828 Test: blockdev writev readv 30 x 1block ...passed 00:13:56.828 Test: blockdev writev readv block ...passed 00:13:56.828 Test: blockdev writev readv size > 128k ...passed 00:13:56.828 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:56.828 Test: blockdev comparev and writev ...passed 00:13:56.828 Test: blockdev nvme passthru rw ...passed 00:13:56.828 Test: blockdev nvme passthru vendor 
specific ...passed 00:13:56.828 Test: blockdev nvme admin passthru ...passed 00:13:56.828 Test: blockdev copy ...passed 00:13:56.828 Suite: bdevio tests on: Malloc2p5 00:13:56.828 Test: blockdev write read block ...passed 00:13:56.828 Test: blockdev write zeroes read block ...passed 00:13:56.828 Test: blockdev write zeroes read no split ...passed 00:13:56.828 Test: blockdev write zeroes read split ...passed 00:13:57.087 Test: blockdev write zeroes read split partial ...passed 00:13:57.087 Test: blockdev reset ...passed 00:13:57.087 Test: blockdev write read 8 blocks ...passed 00:13:57.087 Test: blockdev write read size > 128k ...passed 00:13:57.087 Test: blockdev write read invalid size ...passed 00:13:57.087 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:57.087 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:57.087 Test: blockdev write read max offset ...passed 00:13:57.087 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:57.087 Test: blockdev writev readv 8 blocks ...passed 00:13:57.087 Test: blockdev writev readv 30 x 1block ...passed 00:13:57.087 Test: blockdev writev readv block ...passed 00:13:57.087 Test: blockdev writev readv size > 128k ...passed 00:13:57.087 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:57.087 Test: blockdev comparev and writev ...passed 00:13:57.087 Test: blockdev nvme passthru rw ...passed 00:13:57.087 Test: blockdev nvme passthru vendor specific ...passed 00:13:57.087 Test: blockdev nvme admin passthru ...passed 00:13:57.087 Test: blockdev copy ...passed 00:13:57.087 Suite: bdevio tests on: Malloc2p4 00:13:57.087 Test: blockdev write read block ...passed 00:13:57.087 Test: blockdev write zeroes read block ...passed 00:13:57.087 Test: blockdev write zeroes read no split ...passed 00:13:57.087 Test: blockdev write zeroes read split ...passed 00:13:57.087 Test: blockdev write zeroes read split partial ...passed 00:13:57.087 Test: blockdev reset ...passed 00:13:57.087 Test: blockdev write read 8 blocks ...passed 00:13:57.087 Test: blockdev write read size > 128k ...passed 00:13:57.087 Test: blockdev write read invalid size ...passed 00:13:57.087 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:57.087 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:57.087 Test: blockdev write read max offset ...passed 00:13:57.087 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:57.087 Test: blockdev writev readv 8 blocks ...passed 00:13:57.087 Test: blockdev writev readv 30 x 1block ...passed 00:13:57.087 Test: blockdev writev readv block ...passed 00:13:57.087 Test: blockdev writev readv size > 128k ...passed 00:13:57.088 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:57.088 Test: blockdev comparev and writev ...passed 00:13:57.088 Test: blockdev nvme passthru rw ...passed 00:13:57.088 Test: blockdev nvme passthru vendor specific ...passed 00:13:57.088 Test: blockdev nvme admin passthru ...passed 00:13:57.088 Test: blockdev copy ...passed 00:13:57.088 Suite: bdevio tests on: Malloc2p3 00:13:57.088 Test: blockdev write read block ...passed 00:13:57.088 Test: blockdev write zeroes read block ...passed 00:13:57.088 Test: blockdev write zeroes read no split ...passed 00:13:57.088 Test: blockdev write zeroes read split ...passed 00:13:57.088 Test: blockdev write zeroes read split partial ...passed 00:13:57.088 Test: blockdev reset ...passed 00:13:57.088 Test: 
blockdev write read 8 blocks ...passed 00:13:57.088 Test: blockdev write read size > 128k ...passed 00:13:57.088 Test: blockdev write read invalid size ...passed 00:13:57.088 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:57.088 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:57.088 Test: blockdev write read max offset ...passed 00:13:57.088 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:57.088 Test: blockdev writev readv 8 blocks ...passed 00:13:57.088 Test: blockdev writev readv 30 x 1block ...passed 00:13:57.088 Test: blockdev writev readv block ...passed 00:13:57.088 Test: blockdev writev readv size > 128k ...passed 00:13:57.088 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:57.088 Test: blockdev comparev and writev ...passed 00:13:57.088 Test: blockdev nvme passthru rw ...passed 00:13:57.088 Test: blockdev nvme passthru vendor specific ...passed 00:13:57.088 Test: blockdev nvme admin passthru ...passed 00:13:57.088 Test: blockdev copy ...passed 00:13:57.088 Suite: bdevio tests on: Malloc2p2 00:13:57.088 Test: blockdev write read block ...passed 00:13:57.088 Test: blockdev write zeroes read block ...passed 00:13:57.088 Test: blockdev write zeroes read no split ...passed 00:13:57.088 Test: blockdev write zeroes read split ...passed 00:13:57.350 Test: blockdev write zeroes read split partial ...passed 00:13:57.350 Test: blockdev reset ...passed 00:13:57.351 Test: blockdev write read 8 blocks ...passed 00:13:57.351 Test: blockdev write read size > 128k ...passed 00:13:57.351 Test: blockdev write read invalid size ...passed 00:13:57.351 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:57.351 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:57.351 Test: blockdev write read max offset ...passed 00:13:57.351 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:57.351 Test: blockdev writev readv 8 blocks ...passed 00:13:57.351 Test: blockdev writev readv 30 x 1block ...passed 00:13:57.351 Test: blockdev writev readv block ...passed 00:13:57.351 Test: blockdev writev readv size > 128k ...passed 00:13:57.351 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:57.351 Test: blockdev comparev and writev ...passed 00:13:57.351 Test: blockdev nvme passthru rw ...passed 00:13:57.351 Test: blockdev nvme passthru vendor specific ...passed 00:13:57.351 Test: blockdev nvme admin passthru ...passed 00:13:57.351 Test: blockdev copy ...passed 00:13:57.351 Suite: bdevio tests on: Malloc2p1 00:13:57.351 Test: blockdev write read block ...passed 00:13:57.351 Test: blockdev write zeroes read block ...passed 00:13:57.351 Test: blockdev write zeroes read no split ...passed 00:13:57.351 Test: blockdev write zeroes read split ...passed 00:13:57.351 Test: blockdev write zeroes read split partial ...passed 00:13:57.351 Test: blockdev reset ...passed 00:13:57.351 Test: blockdev write read 8 blocks ...passed 00:13:57.351 Test: blockdev write read size > 128k ...passed 00:13:57.351 Test: blockdev write read invalid size ...passed 00:13:57.351 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:57.351 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:57.351 Test: blockdev write read max offset ...passed 00:13:57.351 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:57.351 Test: blockdev writev readv 8 blocks ...passed 00:13:57.351 
Test: blockdev writev readv 30 x 1block ...passed 00:13:57.351 Test: blockdev writev readv block ...passed 00:13:57.351 Test: blockdev writev readv size > 128k ...passed 00:13:57.351 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:57.351 Test: blockdev comparev and writev ...passed 00:13:57.351 Test: blockdev nvme passthru rw ...passed 00:13:57.351 Test: blockdev nvme passthru vendor specific ...passed 00:13:57.351 Test: blockdev nvme admin passthru ...passed 00:13:57.351 Test: blockdev copy ...passed 00:13:57.351 Suite: bdevio tests on: Malloc2p0 00:13:57.351 Test: blockdev write read block ...passed 00:13:57.351 Test: blockdev write zeroes read block ...passed 00:13:57.351 Test: blockdev write zeroes read no split ...passed 00:13:57.351 Test: blockdev write zeroes read split ...passed 00:13:57.351 Test: blockdev write zeroes read split partial ...passed 00:13:57.351 Test: blockdev reset ...passed 00:13:57.351 Test: blockdev write read 8 blocks ...passed 00:13:57.351 Test: blockdev write read size > 128k ...passed 00:13:57.351 Test: blockdev write read invalid size ...passed 00:13:57.351 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:57.351 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:57.351 Test: blockdev write read max offset ...passed 00:13:57.351 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:57.351 Test: blockdev writev readv 8 blocks ...passed 00:13:57.351 Test: blockdev writev readv 30 x 1block ...passed 00:13:57.351 Test: blockdev writev readv block ...passed 00:13:57.351 Test: blockdev writev readv size > 128k ...passed 00:13:57.351 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:57.351 Test: blockdev comparev and writev ...passed 00:13:57.351 Test: blockdev nvme passthru rw ...passed 00:13:57.351 Test: blockdev nvme passthru vendor specific ...passed 00:13:57.351 Test: blockdev nvme admin passthru ...passed 00:13:57.351 Test: blockdev copy ...passed 00:13:57.351 Suite: bdevio tests on: Malloc1p1 00:13:57.351 Test: blockdev write read block ...passed 00:13:57.351 Test: blockdev write zeroes read block ...passed 00:13:57.351 Test: blockdev write zeroes read no split ...passed 00:13:57.351 Test: blockdev write zeroes read split ...passed 00:13:57.609 Test: blockdev write zeroes read split partial ...passed 00:13:57.609 Test: blockdev reset ...passed 00:13:57.609 Test: blockdev write read 8 blocks ...passed 00:13:57.609 Test: blockdev write read size > 128k ...passed 00:13:57.609 Test: blockdev write read invalid size ...passed 00:13:57.609 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:57.609 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:57.609 Test: blockdev write read max offset ...passed 00:13:57.609 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:57.609 Test: blockdev writev readv 8 blocks ...passed 00:13:57.609 Test: blockdev writev readv 30 x 1block ...passed 00:13:57.609 Test: blockdev writev readv block ...passed 00:13:57.609 Test: blockdev writev readv size > 128k ...passed 00:13:57.609 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:57.609 Test: blockdev comparev and writev ...passed 00:13:57.609 Test: blockdev nvme passthru rw ...passed 00:13:57.609 Test: blockdev nvme passthru vendor specific ...passed 00:13:57.609 Test: blockdev nvme admin passthru ...passed 00:13:57.609 Test: blockdev copy ...passed 00:13:57.609 Suite: 
bdevio tests on: Malloc1p0 00:13:57.609 Test: blockdev write read block ...passed 00:13:57.609 Test: blockdev write zeroes read block ...passed 00:13:57.609 Test: blockdev write zeroes read no split ...passed 00:13:57.609 Test: blockdev write zeroes read split ...passed 00:13:57.609 Test: blockdev write zeroes read split partial ...passed 00:13:57.609 Test: blockdev reset ...passed 00:13:57.609 Test: blockdev write read 8 blocks ...passed 00:13:57.609 Test: blockdev write read size > 128k ...passed 00:13:57.609 Test: blockdev write read invalid size ...passed 00:13:57.609 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:57.609 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:57.609 Test: blockdev write read max offset ...passed 00:13:57.609 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:57.609 Test: blockdev writev readv 8 blocks ...passed 00:13:57.609 Test: blockdev writev readv 30 x 1block ...passed 00:13:57.609 Test: blockdev writev readv block ...passed 00:13:57.609 Test: blockdev writev readv size > 128k ...passed 00:13:57.609 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:57.609 Test: blockdev comparev and writev ...passed 00:13:57.609 Test: blockdev nvme passthru rw ...passed 00:13:57.609 Test: blockdev nvme passthru vendor specific ...passed 00:13:57.609 Test: blockdev nvme admin passthru ...passed 00:13:57.609 Test: blockdev copy ...passed 00:13:57.609 Suite: bdevio tests on: Malloc0 00:13:57.609 Test: blockdev write read block ...passed 00:13:57.609 Test: blockdev write zeroes read block ...passed 00:13:57.609 Test: blockdev write zeroes read no split ...passed 00:13:57.609 Test: blockdev write zeroes read split ...passed 00:13:57.609 Test: blockdev write zeroes read split partial ...passed 00:13:57.609 Test: blockdev reset ...passed 00:13:57.609 Test: blockdev write read 8 blocks ...passed 00:13:57.609 Test: blockdev write read size > 128k ...passed 00:13:57.609 Test: blockdev write read invalid size ...passed 00:13:57.609 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:57.609 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:57.609 Test: blockdev write read max offset ...passed 00:13:57.609 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:57.609 Test: blockdev writev readv 8 blocks ...passed 00:13:57.609 Test: blockdev writev readv 30 x 1block ...passed 00:13:57.609 Test: blockdev writev readv block ...passed 00:13:57.609 Test: blockdev writev readv size > 128k ...passed 00:13:57.609 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:57.609 Test: blockdev comparev and writev ...passed 00:13:57.609 Test: blockdev nvme passthru rw ...passed 00:13:57.609 Test: blockdev nvme passthru vendor specific ...passed 00:13:57.609 Test: blockdev nvme admin passthru ...passed 00:13:57.609 Test: blockdev copy ...passed 00:13:57.609 00:13:57.609 Run Summary: Type Total Ran Passed Failed Inactive 00:13:57.609 suites 16 16 n/a 0 0 00:13:57.609 tests 368 368 368 0 0 00:13:57.609 asserts 2224 2224 2224 0 n/a 00:13:57.609 00:13:57.609 Elapsed time = 4.004 seconds 00:13:57.609 0 00:13:57.609 07:24:01 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 70358 00:13:57.609 07:24:01 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 70358 ']' 00:13:57.609 07:24:01 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 70358 
00:13:57.609 07:24:01 blockdev_general.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:13:57.609 07:24:01 blockdev_general.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.609 07:24:01 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70358 00:13:57.609 07:24:01 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:57.609 07:24:01 blockdev_general.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:57.609 killing process with pid 70358 00:13:57.609 07:24:01 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70358' 00:13:57.609 07:24:01 blockdev_general.bdev_bounds -- common/autotest_common.sh@973 -- # kill 70358 00:13:57.609 07:24:01 blockdev_general.bdev_bounds -- common/autotest_common.sh@978 -- # wait 70358 00:14:00.231 07:24:03 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:00.232 00:14:00.232 real 0m5.422s 00:14:00.232 user 0m14.481s 00:14:00.232 sys 0m0.628s 00:14:00.232 07:24:03 blockdev_general.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.232 07:24:03 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:00.232 ************************************ 00:14:00.232 END TEST bdev_bounds 00:14:00.232 ************************************ 00:14:00.232 07:24:03 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:14:00.232 07:24:03 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:00.232 07:24:03 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.232 07:24:03 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:00.232 ************************************ 00:14:00.232 START TEST bdev_nbd 00:14:00.232 ************************************ 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=16 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=16 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=70443 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 70443 /var/tmp/spdk-nbd.sock 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 70443 ']' 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.232 07:24:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:00.232 [2024-11-20 07:24:03.964926] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
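Note on the nbd_rpc_start_stop_verify loop that follows: for each of the 16 bdevs the test issues an nbd_start_disk RPC on /var/tmp/spdk-nbd.sock, waits until the kernel lists the new device in /proc/partitions, then performs a single 4 KiB O_DIRECT read with dd and checks the copied size. A condensed sketch of one iteration, using only commands that appear in the trace below (paths shortened):
  nbd_device=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0)  # prints the assigned device, e.g. /dev/nbd0
  grep -q -w "${nbd_device#/dev/}" /proc/partitions                                # waitfornbd: retried until the device shows up
  dd if="$nbd_device" of=./test/bdev/nbdtest bs=4096 count=1 iflag=direct          # 4 KiB sanity read
  stat -c %s ./test/bdev/nbdtest                                                   # expect 4096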
00:14:00.232 [2024-11-20 07:24:03.965063] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.232 [2024-11-20 07:24:04.145058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.492 [2024-11-20 07:24:04.264590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.751 [2024-11-20 07:24:04.657749] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:00.751 [2024-11-20 07:24:04.657827] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:00.751 [2024-11-20 07:24:04.665668] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:00.751 [2024-11-20 07:24:04.665736] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:00.751 [2024-11-20 07:24:04.673671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:00.751 [2024-11-20 07:24:04.673737] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:00.751 [2024-11-20 07:24:04.673749] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:01.010 [2024-11-20 07:24:04.865981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:01.010 [2024-11-20 07:24:04.866077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.010 [2024-11-20 07:24:04.866109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:14:01.010 [2024-11-20 07:24:04.866123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.010 [2024-11-20 07:24:04.868913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.010 [2024-11-20 07:24:04.868958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # 
bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.589 1+0 records in 00:14:01.589 1+0 records out 00:14:01.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337964 s, 12.1 MB/s 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:14:01.589 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 
00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.853 1+0 records in 00:14:01.853 1+0 records out 00:14:01.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033177 s, 12.3 MB/s 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:14:01.853 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:14:02.112 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:02.112 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:02.112 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:02.112 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:14:02.112 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:02.112 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:02.112 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:02.112 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:14:02.112 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:02.112 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:02.113 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:02.113 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.113 1+0 records in 00:14:02.113 1+0 records out 00:14:02.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292095 s, 14.0 MB/s 00:14:02.113 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.113 07:24:05 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@890 -- # size=4096 00:14:02.113 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.113 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:02.113 07:24:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:02.113 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:02.113 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:14:02.113 07:24:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.372 1+0 records in 00:14:02.372 1+0 records out 00:14:02.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442438 s, 9.3 MB/s 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:14:02.372 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 
-- # local i 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.632 1+0 records in 00:14:02.632 1+0 records out 00:14:02.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438163 s, 9.3 MB/s 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:14:02.632 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.892 1+0 records in 00:14:02.892 1+0 records out 00:14:02.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450241 s, 9.1 MB/s 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:02.892 07:24:06 
blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:14:02.892 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.152 1+0 records in 00:14:03.152 1+0 records out 00:14:03.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448908 s, 9.1 MB/s 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:14:03.152 07:24:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd7 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd7 /proc/partitions 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.411 1+0 records in 00:14:03.411 1+0 records out 00:14:03.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517546 s, 7.9 MB/s 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:14:03.411 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd8 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd8 /proc/partitions 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.671 1+0 records in 00:14:03.671 1+0 records out 00:14:03.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471097 s, 8.7 MB/s 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:14:03.671 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd9 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd9 /proc/partitions 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.931 1+0 records in 00:14:03.931 1+0 records out 00:14:03.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489949 s, 8.4 MB/s 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:14:03.931 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:04.191 07:24:07 
blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.191 1+0 records in 00:14:04.191 1+0 records out 00:14:04.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053419 s, 7.7 MB/s 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:14:04.191 07:24:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.450 1+0 records in 00:14:04.450 1+0 records out 00:14:04.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000537309 s, 7.6 MB/s 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.450 07:24:08 
blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:14:04.450 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.709 1+0 records in 00:14:04.709 1+0 records out 00:14:04.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563568 s, 7.3 MB/s 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:14:04.709 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i 
<= 20 )) 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.969 1+0 records in 00:14:04.969 1+0 records out 00:14:04.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000659697 s, 6.2 MB/s 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:04.969 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.969 1+0 records in 00:14:04.969 1+0 records out 00:14:04.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511583 s, 8.0 MB/s 00:14:05.229 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.229 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:05.229 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.229 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 
0 ']' 00:14:05.229 07:24:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:05.229 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:05.229 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:14:05.229 07:24:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:14:05.229 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:14:05.229 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:14:05.229 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:14:05.229 07:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd15 00:14:05.229 07:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:05.229 07:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:05.229 07:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:05.229 07:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd15 /proc/partitions 00:14:05.229 07:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:05.229 07:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:05.229 07:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:05.229 07:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.229 1+0 records in 00:14:05.229 1+0 records out 00:14:05.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000907401 s, 4.5 MB/s 00:14:05.229 07:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.229 07:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:05.229 07:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.487 07:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:05.487 07:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:05.487 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:05.487 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:14:05.487 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:05.487 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd0", 00:14:05.487 "bdev_name": "Malloc0" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd1", 00:14:05.487 "bdev_name": "Malloc1p0" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd2", 00:14:05.487 "bdev_name": "Malloc1p1" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd3", 00:14:05.487 "bdev_name": "Malloc2p0" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd4", 00:14:05.487 "bdev_name": "Malloc2p1" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd5", 00:14:05.487 "bdev_name": "Malloc2p2" 00:14:05.487 }, 00:14:05.487 { 
00:14:05.487 "nbd_device": "/dev/nbd6", 00:14:05.487 "bdev_name": "Malloc2p3" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd7", 00:14:05.487 "bdev_name": "Malloc2p4" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd8", 00:14:05.487 "bdev_name": "Malloc2p5" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd9", 00:14:05.487 "bdev_name": "Malloc2p6" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd10", 00:14:05.487 "bdev_name": "Malloc2p7" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd11", 00:14:05.487 "bdev_name": "TestPT" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd12", 00:14:05.487 "bdev_name": "raid0" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd13", 00:14:05.487 "bdev_name": "concat0" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd14", 00:14:05.487 "bdev_name": "raid1" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd15", 00:14:05.487 "bdev_name": "AIO0" 00:14:05.487 } 00:14:05.487 ]' 00:14:05.487 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:05.487 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd0", 00:14:05.487 "bdev_name": "Malloc0" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd1", 00:14:05.487 "bdev_name": "Malloc1p0" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd2", 00:14:05.487 "bdev_name": "Malloc1p1" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd3", 00:14:05.487 "bdev_name": "Malloc2p0" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd4", 00:14:05.487 "bdev_name": "Malloc2p1" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd5", 00:14:05.487 "bdev_name": "Malloc2p2" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd6", 00:14:05.487 "bdev_name": "Malloc2p3" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd7", 00:14:05.487 "bdev_name": "Malloc2p4" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd8", 00:14:05.487 "bdev_name": "Malloc2p5" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd9", 00:14:05.487 "bdev_name": "Malloc2p6" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd10", 00:14:05.487 "bdev_name": "Malloc2p7" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd11", 00:14:05.487 "bdev_name": "TestPT" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd12", 00:14:05.487 "bdev_name": "raid0" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd13", 00:14:05.487 "bdev_name": "concat0" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd14", 00:14:05.487 "bdev_name": "raid1" 00:14:05.487 }, 00:14:05.487 { 00:14:05.487 "nbd_device": "/dev/nbd15", 00:14:05.487 "bdev_name": "AIO0" 00:14:05.487 } 00:14:05.487 ]' 00:14:05.487 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:05.487 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:14:05.487 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:14:05.487 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:14:05.487 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:05.487 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:05.487 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:05.487 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:05.746 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:05.746 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:05.746 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:05.746 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:05.746 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:05.746 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:05.746 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:05.746 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:05.746 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:05.746 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:06.007 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:06.007 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:06.007 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:06.007 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:06.007 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:06.007 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:06.007 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:06.007 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:06.007 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.007 07:24:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:06.268 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:06.268 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:06.268 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:06.268 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:06.268 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:06.268 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:14:06.268 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:06.268 07:24:10 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:06.268 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.268 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:06.529 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:06.529 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:06.529 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:06.529 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:06.529 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:06.529 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:06.529 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:06.529 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:06.529 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.529 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.789 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:14:07.049 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:14:07.049 
07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:14:07.049 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:14:07.049 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.049 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.049 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:14:07.049 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:07.049 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.049 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.049 07:24:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:14:07.309 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:14:07.309 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:14:07.309 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:14:07.309 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.309 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.309 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:14:07.309 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:07.309 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.309 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.309 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:14:07.568 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:14:07.568 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:14:07.568 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:14:07.568 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.568 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.568 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:14:07.568 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:07.568 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.568 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.568 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:14:07.827 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:14:07.827 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:14:07.827 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:14:07.827 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.827 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.827 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q 
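Each nbd_stop_disk call above is followed by waitfornbd_exit, which only polls /proc/partitions until the device name is gone; no I/O check is needed on teardown. A sketch consistent with the traced grep / break / return sequence; the sleep branch is an assumption, since the trace only shows runs where the device had already disappeared.

waitfornbd_exit() {
    local nbd_name=$1
    local i

    # Give the kernel up to 20 polls to drop the device from /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1   # still present, keep waiting (assumed branch)
        else
            break       # gone, as in the traced runs
        fi
    done

    return 0
}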
-w nbd9 /proc/partitions 00:14:07.827 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:07.827 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.827 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.827 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:08.086 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:08.086 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:08.086 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:08.086 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.086 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.086 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:08.086 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:08.086 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.086 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.086 07:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:08.346 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:08.346 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:08.346 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:08.346 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.346 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.346 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:08.346 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:08.346 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.346 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.346 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:08.606 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:08.606 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:08.606 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:08.606 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.606 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.606 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:08.606 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:08.606 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.606 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.606 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.866 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:14:09.126 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:14:09.126 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:14:09.126 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:14:09.126 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.126 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.126 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:14:09.126 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:09.126 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.126 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:09.126 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:09.126 07:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r 
'.[] | .nbd_device' 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 
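Before the bdevs are re-exported, the trace runs nbd_get_count against the now empty nbd list and checks that the result really is 0. The helper below is reconstructed from the nbd_common.sh@61 through @66 lines above; the '|| true' guard matches the traced fallback when grep counts no /dev/nbd entries.

nbd_get_count() {
    local rpc_server=$1
    local nbd_disks_json nbd_disks_name count

    # An empty export list comes back as the JSON array '[]'.
    nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

    # grep -c exits non-zero when it counts nothing, hence the 'true' fallback.
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    echo "$count"
}

The caller then compares the echoed count against 0, which is the '[' 0 -ne 0 ']' check seen in the trace before the next nbd_rpc_data_verify pass starts.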
0 )) 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:09.386 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:09.646 /dev/nbd0 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:09.646 1+0 records in 00:14:09.646 1+0 records out 00:14:09.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429106 s, 9.5 MB/s 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:09.646 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:14:09.906 /dev/nbd1 00:14:09.906 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:09.906 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:09.906 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:09.906 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:09.906 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:09.906 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:09.906 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:09.906 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:09.906 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:09.906 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 
-- # (( i <= 20 )) 00:14:09.906 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:09.906 1+0 records in 00:14:09.906 1+0 records out 00:14:09.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437282 s, 9.4 MB/s 00:14:09.906 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.906 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:09.906 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.906 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:09.906 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:09.907 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:09.907 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:09.907 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:14:10.166 /dev/nbd10 00:14:10.166 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:10.166 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:10.166 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:14:10.166 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:10.166 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:10.166 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:10.166 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:14:10.166 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:10.166 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:10.166 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:10.166 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.166 1+0 records in 00:14:10.166 1+0 records out 00:14:10.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446771 s, 9.2 MB/s 00:14:10.166 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.166 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:10.166 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.166 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:10.167 07:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:10.167 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:10.167 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:10.167 07:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 
/dev/nbd11 00:14:10.426 /dev/nbd11 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.426 1+0 records in 00:14:10.426 1+0 records out 00:14:10.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388416 s, 10.5 MB/s 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:10.426 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:14:10.686 /dev/nbd12 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.686 1+0 records in 00:14:10.686 1+0 records 
out 00:14:10.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462 s, 8.9 MB/s 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:10.686 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:14:10.946 /dev/nbd13 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.946 1+0 records in 00:14:10.946 1+0 records out 00:14:10.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266038 s, 15.4 MB/s 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:10.946 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:14:11.206 /dev/nbd14 00:14:11.206 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:14:11.206 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:14:11.206 07:24:14 
blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:14:11.206 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:11.206 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:11.206 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:11.206 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:14:11.206 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:11.206 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:11.206 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:11.207 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.207 1+0 records in 00:14:11.207 1+0 records out 00:14:11.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436552 s, 9.4 MB/s 00:14:11.207 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.207 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:11.207 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.207 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:11.207 07:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:11.207 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:11.207 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:11.207 07:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:14:11.467 /dev/nbd15 00:14:11.467 07:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:14:11.467 07:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:14:11.467 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd15 00:14:11.467 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:11.467 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:11.467 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:11.467 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd15 /proc/partitions 00:14:11.467 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:11.467 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:11.467 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:11.467 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.467 1+0 records in 00:14:11.467 1+0 records out 00:14:11.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451366 s, 9.1 MB/s 00:14:11.467 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.467 07:24:15 
blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:11.467 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.467 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:11.467 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:11.467 07:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:11.467 07:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:11.467 07:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:14:11.727 /dev/nbd2 00:14:11.727 07:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:14:11.727 07:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:14:11.727 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:14:11.727 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:11.727 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:11.727 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:11.727 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:14:11.727 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:11.727 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:11.727 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:11.727 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.727 1+0 records in 00:14:11.727 1+0 records out 00:14:11.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507987 s, 8.1 MB/s 00:14:11.728 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.728 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:11.728 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.728 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:11.728 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:11.728 07:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:11.728 07:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:11.728 07:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:14:12.018 /dev/nbd3 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( 
i = 1 )) 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.018 1+0 records in 00:14:12.018 1+0 records out 00:14:12.018 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038462 s, 10.6 MB/s 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:12.018 07:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:14:12.279 /dev/nbd4 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.279 1+0 records in 00:14:12.279 1+0 records out 00:14:12.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379917 s, 10.8 MB/s 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:12.279 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:14:12.540 /dev/nbd5 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.540 1+0 records in 00:14:12.540 1+0 records out 00:14:12.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466297 s, 8.8 MB/s 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:12.540 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:14:12.801 /dev/nbd6 00:14:12.801 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:14:12.801 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:14:12.801 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:14:12.801 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:12.801 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:12.801 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:12.801 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:14:12.801 07:24:16 
blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:12.801 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:12.801 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:12.801 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.801 1+0 records in 00:14:12.801 1+0 records out 00:14:12.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000810709 s, 5.1 MB/s 00:14:12.801 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.801 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:12.801 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.801 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:12.801 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:12.801 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.801 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:12.801 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:14:13.062 /dev/nbd7 00:14:13.062 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:14:13.062 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:14:13.062 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd7 00:14:13.062 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:13.062 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:13.062 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:13.062 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd7 /proc/partitions 00:14:13.062 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:13.062 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:13.062 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:13.062 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.062 1+0 records in 00:14:13.062 1+0 records out 00:14:13.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672783 s, 6.1 MB/s 00:14:13.062 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.062 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:13.062 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.062 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:13.062 07:24:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:13.062 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.062 07:24:16 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:13.062 07:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:14:13.322 /dev/nbd8 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd8 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd8 /proc/partitions 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.322 1+0 records in 00:14:13.322 1+0 records out 00:14:13.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589424 s, 6.9 MB/s 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:13.322 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:14:13.582 /dev/nbd9 00:14:13.582 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:14:13.582 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:14:13.582 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd9 00:14:13.582 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:13.582 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:13.582 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:13.582 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd9 /proc/partitions 00:14:13.582 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:13.582 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:13.582 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:13.582 
07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.582 1+0 records in 00:14:13.582 1+0 records out 00:14:13.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000927886 s, 4.4 MB/s 00:14:13.582 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.582 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:13.582 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.583 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:13.583 07:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:13.583 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.583 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:13.583 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:13.583 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:13.583 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd0", 00:14:13.859 "bdev_name": "Malloc0" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd1", 00:14:13.859 "bdev_name": "Malloc1p0" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd10", 00:14:13.859 "bdev_name": "Malloc1p1" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd11", 00:14:13.859 "bdev_name": "Malloc2p0" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd12", 00:14:13.859 "bdev_name": "Malloc2p1" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd13", 00:14:13.859 "bdev_name": "Malloc2p2" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd14", 00:14:13.859 "bdev_name": "Malloc2p3" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd15", 00:14:13.859 "bdev_name": "Malloc2p4" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd2", 00:14:13.859 "bdev_name": "Malloc2p5" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd3", 00:14:13.859 "bdev_name": "Malloc2p6" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd4", 00:14:13.859 "bdev_name": "Malloc2p7" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd5", 00:14:13.859 "bdev_name": "TestPT" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd6", 00:14:13.859 "bdev_name": "raid0" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd7", 00:14:13.859 "bdev_name": "concat0" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd8", 00:14:13.859 "bdev_name": "raid1" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd9", 00:14:13.859 "bdev_name": "AIO0" 00:14:13.859 } 00:14:13.859 ]' 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd0", 00:14:13.859 "bdev_name": "Malloc0" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd1", 00:14:13.859 
"bdev_name": "Malloc1p0" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd10", 00:14:13.859 "bdev_name": "Malloc1p1" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd11", 00:14:13.859 "bdev_name": "Malloc2p0" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd12", 00:14:13.859 "bdev_name": "Malloc2p1" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd13", 00:14:13.859 "bdev_name": "Malloc2p2" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd14", 00:14:13.859 "bdev_name": "Malloc2p3" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd15", 00:14:13.859 "bdev_name": "Malloc2p4" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd2", 00:14:13.859 "bdev_name": "Malloc2p5" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd3", 00:14:13.859 "bdev_name": "Malloc2p6" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd4", 00:14:13.859 "bdev_name": "Malloc2p7" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd5", 00:14:13.859 "bdev_name": "TestPT" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd6", 00:14:13.859 "bdev_name": "raid0" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd7", 00:14:13.859 "bdev_name": "concat0" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd8", 00:14:13.859 "bdev_name": "raid1" 00:14:13.859 }, 00:14:13.859 { 00:14:13.859 "nbd_device": "/dev/nbd9", 00:14:13.859 "bdev_name": "AIO0" 00:14:13.859 } 00:14:13.859 ]' 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:13.859 /dev/nbd1 00:14:13.859 /dev/nbd10 00:14:13.859 /dev/nbd11 00:14:13.859 /dev/nbd12 00:14:13.859 /dev/nbd13 00:14:13.859 /dev/nbd14 00:14:13.859 /dev/nbd15 00:14:13.859 /dev/nbd2 00:14:13.859 /dev/nbd3 00:14:13.859 /dev/nbd4 00:14:13.859 /dev/nbd5 00:14:13.859 /dev/nbd6 00:14:13.859 /dev/nbd7 00:14:13.859 /dev/nbd8 00:14:13.859 /dev/nbd9' 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:13.859 /dev/nbd1 00:14:13.859 /dev/nbd10 00:14:13.859 /dev/nbd11 00:14:13.859 /dev/nbd12 00:14:13.859 /dev/nbd13 00:14:13.859 /dev/nbd14 00:14:13.859 /dev/nbd15 00:14:13.859 /dev/nbd2 00:14:13.859 /dev/nbd3 00:14:13.859 /dev/nbd4 00:14:13.859 /dev/nbd5 00:14:13.859 /dev/nbd6 00:14:13.859 /dev/nbd7 00:14:13.859 /dev/nbd8 00:14:13.859 /dev/nbd9' 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' 
'/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:13.859 256+0 records in 00:14:13.859 256+0 records out 00:14:13.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00854253 s, 123 MB/s 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:13.859 256+0 records in 00:14:13.859 256+0 records out 00:14:13.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0922989 s, 11.4 MB/s 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:13.859 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:14.119 256+0 records in 00:14:14.119 256+0 records out 00:14:14.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0900476 s, 11.6 MB/s 00:14:14.119 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.119 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:14.119 256+0 records in 00:14:14.119 256+0 records out 00:14:14.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0835154 s, 12.6 MB/s 00:14:14.119 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.119 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:14.119 256+0 records in 00:14:14.119 256+0 records out 00:14:14.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0860637 s, 12.2 MB/s 00:14:14.119 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.119 07:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:14.379 256+0 records in 00:14:14.379 256+0 records out 00:14:14.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.087979 s, 11.9 MB/s 00:14:14.379 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.379 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:14.379 256+0 records in 00:14:14.379 256+0 records out 00:14:14.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0880188 s, 11.9 MB/s 00:14:14.379 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.379 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:14:14.379 256+0 records in 00:14:14.379 256+0 records out 00:14:14.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0862892 s, 12.2 MB/s 00:14:14.379 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.379 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:14:14.639 256+0 records in 00:14:14.639 256+0 records out 00:14:14.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0822415 s, 12.7 MB/s 00:14:14.639 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.639 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:14:14.639 256+0 records in 00:14:14.639 256+0 records out 00:14:14.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.089358 s, 11.7 MB/s 00:14:14.639 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.639 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:14:14.639 256+0 records in 00:14:14.639 256+0 records out 00:14:14.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0852671 s, 12.3 MB/s 00:14:14.639 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.639 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:14:14.899 256+0 records in 00:14:14.899 256+0 records out 00:14:14.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0836791 s, 12.5 MB/s 00:14:14.899 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.899 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:14:14.899 256+0 records in 00:14:14.899 256+0 records out 00:14:14.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0781092 s, 13.4 MB/s 00:14:14.899 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.899 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:14:14.899 256+0 records in 00:14:14.899 256+0 records out 00:14:14.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0840084 s, 12.5 MB/s 00:14:14.899 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.899 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:14:15.157 256+0 records in 00:14:15.157 256+0 records out 00:14:15.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0803389 s, 13.1 MB/s 00:14:15.157 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:15.157 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:14:15.157 256+0 records in 00:14:15.157 256+0 records out 00:14:15.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0786587 s, 13.3 MB/s 00:14:15.157 
07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:15.157 07:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:14:15.416 256+0 records in 00:14:15.416 256+0 records out 00:14:15.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136692 s, 7.7 MB/s 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd15 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.416 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:15.674 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:15.674 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:15.674 07:24:19 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:15.674 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.674 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.674 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:15.674 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:15.674 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.674 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.674 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:15.933 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:15.933 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:15.933 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:15.933 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.933 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.933 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:15.933 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:15.933 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.933 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.933 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:16.192 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:16.192 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:16.192 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:16.192 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.192 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.192 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:16.192 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:16.192 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.192 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.192 07:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:16.192 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:16.192 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:16.192 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:16.192 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.192 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.192 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:16.192 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 
-- # break 00:14:16.192 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.192 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.192 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:16.451 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:16.451 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:16.451 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:16.451 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.451 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.451 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:16.451 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:16.451 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.451 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.451 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:16.712 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:16.712 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:16.712 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:16.712 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.712 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.712 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:16.712 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:16.712 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.712 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.712 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:14:16.971 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:14:16.971 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:14:16.971 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:14:16.971 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.971 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.971 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:14:16.971 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:16.971 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.971 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.971 07:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:14:17.232 07:24:21 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:14:17.232 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:14:17.232 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:14:17.232 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.232 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.232 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:14:17.232 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:17.232 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.232 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.232 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:17.492 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:17.492 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:17.492 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:17.492 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.492 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.492 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:14:17.492 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:17.492 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.492 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.492 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:17.752 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:17.752 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:17.752 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:17.752 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.752 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.752 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:17.752 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:17.752 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.752 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.752 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:18.011 07:24:21 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.011 07:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:14:18.271 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:14:18.271 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:14:18.271 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:14:18.271 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:18.271 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:18.271 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:14:18.271 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:18.271 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:18.271 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.271 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:14:18.531 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:14:18.531 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:14:18.531 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:14:18.531 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:18.531 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:18.531 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:14:18.531 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:18.531 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:18.531 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.531 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:14:18.790 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:14:18.790 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:14:18.790 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:14:18.790 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:18.790 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:18.790 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:14:18.790 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:18.790 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:18.790 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.790 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:14:19.049 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:14:19.049 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:14:19.049 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:14:19.049 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:19.049 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:19.049 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:14:19.049 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:19.049 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:19.049 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:19.049 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:19.049 07:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:19.309 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:19.309 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:19.309 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:19.309 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:19.309 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:19.309 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:19.309 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:19.309 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:19.309 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:19.309 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:19.309 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:19.309 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:19.309 07:24:23 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:19.309 07:24:23 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:19.309 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:14:19.309 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:19.569 malloc_lvol_verify 00:14:19.569 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:19.872 e5e64c89-1d69-45b1-b249-bfb847a6aa53 00:14:19.872 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:19.872 54eb6633-2632-4219-bfa0-8021fd30fb2d 00:14:19.872 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:20.160 /dev/nbd0 00:14:20.160 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:14:20.160 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:14:20.160 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:14:20.160 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:14:20.160 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:14:20.160 mke2fs 1.47.0 (5-Feb-2023) 00:14:20.160 00:14:20.160 Filesystem too small for a journal 00:14:20.160 Discarding device blocks: 0/1024 done 00:14:20.160 Creating filesystem with 1024 4k blocks and 1024 inodes 00:14:20.160 00:14:20.160 Allocating group tables: 0/1 done 00:14:20.160 Writing inode tables: 0/1 done 00:14:20.160 Writing superblocks and filesystem accounting information: 0/1 done 00:14:20.160 00:14:20.160 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:20.160 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:20.160 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:20.160 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:20.160 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:20.160 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:20.160 07:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 
-- # return 0 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 70443 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 70443 ']' 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 70443 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70443 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70443' 00:14:20.419 killing process with pid 70443 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@973 -- # kill 70443 00:14:20.419 07:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@978 -- # wait 70443 00:14:22.956 07:24:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:22.956 00:14:22.956 real 0m22.890s 00:14:22.956 user 0m30.687s 00:14:22.956 sys 0m9.090s 00:14:22.956 07:24:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.956 ************************************ 00:14:22.956 END TEST bdev_nbd 00:14:22.956 ************************************ 00:14:22.956 07:24:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:22.956 07:24:26 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:14:22.956 07:24:26 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']' 00:14:22.956 07:24:26 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']' 00:14:22.956 07:24:26 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:14:22.956 07:24:26 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:22.956 07:24:26 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.956 07:24:26 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:22.956 ************************************ 00:14:22.956 START TEST bdev_fio 00:14:22.956 ************************************ 00:14:22.956 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # 
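The killprocess 70443 call near the top of this block is the stock teardown helper from autotest_common.sh: it checks that the PID is non-empty and still alive (kill -0), looks up the process name via ps (reactor_0 here) and refuses the sudo-wrapper path, then sends SIGTERM and waits for the process to exit. A simplified, hedged equivalent (the real helper has extra branches, e.g. for sudo-wrapped targets, that are skipped here):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 0            # nothing left to do
      local name
      name=$(ps --no-headers -o comm= "$pid")           # reactor_0 in the trace above
      if [ "$name" = sudo ]; then
          return 1                                      # sudo-wrapper case handled separately upstream; omitted
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                   # only works for children of this shell
  }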
local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:14:22.956 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]' 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]' 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]' 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p0]' 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]' 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]' 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo 
filename=Malloc2p2 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]' 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]' 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]' 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]' 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]' 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]' 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]' 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]' 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]' 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]' 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 
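Before the verify run, fio_config_gen and the per-bdev loop above assemble /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio: template global and verify stanzas are cat'ed in (their contents are not visible in this trace), serialize_overlap=1 is appended because the generator was told the AIO bdev type and detected fio 3.35, and one [job_<bdev>] section naming each bdev is added. Roughly what the visible part of that generation amounts to, with the template stanzas left out as an assumption:

  config=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
  {
      echo serialize_overlap=1      # only emitted for the AIO case with fio 3.x
      for b in Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 \
               Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0; do
          echo "[job_$b]"
          echo "filename=$b"
      done
  } >> "$config"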
--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.216 07:24:26 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:23.216 ************************************ 00:14:23.216 START TEST bdev_fio_rw_verify 00:14:23.216 ************************************ 00:14:23.216 07:24:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:23.216 07:24:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:23.216 07:24:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:23.216 07:24:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:23.216 07:24:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:23.216 07:24:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:23.216 07:24:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:14:23.216 07:24:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:23.216 07:24:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:23.216 07:24:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:23.216 07:24:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:14:23.216 07:24:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:23.216 07:24:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:14:23.216 07:24:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:14:23.216 07:24:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:14:23.216 07:24:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:23.216 07:24:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
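Because the fio plugin is ASan-instrumented while /usr/src/fio/fio itself is not, the sanitizer runtime has to be preloaded ahead of the plugin: the helper ldd's build/fio/spdk_bdev, picks out the libasan path, and launches stock fio with both libraries in LD_PRELOAD and --ioengine=spdk_bdev pointed at the generated job file. Condensed from the trace above:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /lib/x86_64-linux-gnu/libasan.so.8 on this runner
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
      --verify_state_save=0 \
      --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      --spdk_mem=0 \
      --aux-path=/home/vagrant/spdk_repo/spdk/../output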
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:23.476 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:23.476 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:23.476 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:23.476 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:23.476 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:23.476 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:23.476 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:23.476 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:23.476 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:23.476 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:23.476 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:23.476 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:23.476 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:23.476 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:23.476 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:23.476 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:23.476 fio-3.35 00:14:23.476 Starting 16 threads 00:14:38.397 00:14:38.397 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=71553: Wed Nov 20 07:24:40 2024 00:14:38.397 read: IOPS=69.5k, BW=272MiB/s (285MB/s)(2717MiB/10004msec) 00:14:38.397 slat (usec): min=2, max=12102, avg=47.37, stdev=267.37 00:14:38.397 clat (usec): min=10, max=12379, avg=351.88, stdev=728.03 00:14:38.397 lat (usec): min=27, max=12403, avg=399.25, stdev=774.51 00:14:38.397 clat percentiles (usec): 00:14:38.397 | 50.000th=[ 221], 99.000th=[ 4359], 99.900th=[ 7308], 99.990th=[ 9241], 00:14:38.397 | 99.999th=[11338] 00:14:38.397 write: IOPS=109k, BW=426MiB/s (447MB/s)(4212MiB/9884msec); 0 zone resets 00:14:38.397 slat (usec): min=7, max=18184, avg=68.73, stdev=326.09 00:14:38.397 clat (usec): min=9, max=15448, avg=424.61, stdev=820.15 00:14:38.397 lat (usec): min=41, max=18618, avg=493.34, stdev=879.84 00:14:38.397 clat percentiles (usec): 00:14:38.397 | 50.000th=[ 265], 99.000th=[ 4424], 99.900th=[ 7504], 99.990th=[11338], 00:14:38.397 | 99.999th=[15008] 00:14:38.397 bw ( KiB/s): min=268320, max=700099, per=98.08%, avg=428037.89, stdev=7780.08, samples=304 00:14:38.397 iops : min=67080, max=175024, avg=107009.16, stdev=1945.01, samples=304 00:14:38.398 
lat (usec) : 10=0.01%, 20=0.01%, 50=0.40%, 100=7.20%, 250=43.90% 00:14:38.398 lat (usec) : 500=41.78%, 750=2.96%, 1000=0.16% 00:14:38.398 lat (msec) : 2=0.16%, 4=1.10%, 10=2.32%, 20=0.02% 00:14:38.398 cpu : usr=61.47%, sys=1.82%, ctx=273534, majf=0, minf=91958 00:14:38.398 IO depths : 1=10.9%, 2=24.2%, 4=52.0%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:38.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.398 complete : 0=0.0%, 4=88.7%, 8=11.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.398 issued rwts: total=695546,1078345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.398 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:38.398 00:14:38.398 Run status group 0 (all jobs): 00:14:38.398 READ: bw=272MiB/s (285MB/s), 272MiB/s-272MiB/s (285MB/s-285MB/s), io=2717MiB (2849MB), run=10004-10004msec 00:14:38.398 WRITE: bw=426MiB/s (447MB/s), 426MiB/s-426MiB/s (447MB/s-447MB/s), io=4212MiB (4417MB), run=9884-9884msec 00:14:39.781 ----------------------------------------------------- 00:14:39.781 Suppressions used: 00:14:39.781 count bytes template 00:14:39.781 16 140 /usr/src/fio/parse.c 00:14:39.781 8716 836736 /usr/src/fio/iolog.c 00:14:39.781 1 904 libcrypto.so 00:14:39.781 ----------------------------------------------------- 00:14:39.781 00:14:39.781 00:14:39.781 real 0m16.361s 00:14:39.781 user 1m52.485s 00:14:39.781 sys 0m3.743s 00:14:39.781 07:24:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.781 07:24:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:14:39.781 ************************************ 00:14:39.781 END TEST bdev_fio_rw_verify 00:14:39.781 ************************************ 00:14:39.781 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:14:39.781 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:39.781 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:39.781 07:24:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:39.781 07:24:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:14:39.781 07:24:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:14:39.781 07:24:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:14:39.781 07:24:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:14:39.781 07:24:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:39.781 07:24:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:14:39.781 07:24:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:14:39.781 07:24:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:39.781 07:24:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:14:39.781 07:24:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:14:39.781 07:24:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:14:39.781 07:24:43 
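As a quick consistency check on the verify-run summary above: 695,546 read I/Os of 4 KiB each come to roughly 2717 MiB, matching io=2717MiB on the READ line (about 272 MiB/s over the 10.004 s run), and 1,078,345 write I/Os of 4 KiB come to roughly 4212 MiB, matching the WRITE line (about 426 MiB/s over 9.884 s).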
blockdev_general.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:14:39.781 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:39.783 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "3aabe122-b104-4b5b-a388-0f2490b95399"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3aabe122-b104-4b5b-a388-0f2490b95399",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "9ea473f3-895b-5836-9ec9-6d4d60e0ca46"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "9ea473f3-895b-5836-9ec9-6d4d60e0ca46",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "bd854861-e64c-5c8b-a652-41bed28b9b77"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "bd854861-e64c-5c8b-a652-41bed28b9b77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "52886fca-d8ce-5303-9c2a-1cde970445ec"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "52886fca-d8ce-5303-9c2a-1cde970445ec",' 
' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "411226be-cc31-5b22-b701-4b296299269f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "411226be-cc31-5b22-b701-4b296299269f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "c55a2a1c-3fb2-58e5-8650-f4eaa2db7c60"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c55a2a1c-3fb2-58e5-8650-f4eaa2db7c60",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "66fefa64-1d62-5483-970c-63a822178b07"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "66fefa64-1d62-5483-970c-63a822178b07",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' 
"nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "8c6d1fa1-1a7e-5339-828b-1363428a081b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8c6d1fa1-1a7e-5339-828b-1363428a081b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "30f2a774-6596-5e45-8a56-201b5f6583af"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "30f2a774-6596-5e45-8a56-201b5f6583af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "1aa5f263-14fd-5940-adc8-9874421e8e4d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1aa5f263-14fd-5940-adc8-9874421e8e4d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "85f1923f-8296-5445-8da5-26477e425ba6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "85f1923f-8296-5445-8da5-26477e425ba6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' 
' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "3197c940-9229-5fc0-9aae-17e522d4bcdc"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3197c940-9229-5fc0-9aae-17e522d4bcdc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "2ed94ade-89da-442f-8f12-c5da833b6fa1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2ed94ade-89da-442f-8f12-c5da833b6fa1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2ed94ade-89da-442f-8f12-c5da833b6fa1",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "a252f256-4482-4d83-ab6f-363be57fad05",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "2f812059-8bcb-403a-af6f-9b03c3887a90",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "6935b68c-805b-41e4-967f-05d9466dad63"' ' ],' ' "product_name": 
"Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6935b68c-805b-41e4-967f-05d9466dad63",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "6935b68c-805b-41e4-967f-05d9466dad63",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "ad1855be-c83d-4373-9ef2-ee763c8ba7d3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "126eede7-79cf-4296-af37-b1d587b3ebb8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "4f72518c-b64e-4741-9804-978e28f1fcf5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4f72518c-b64e-4741-9804-978e28f1fcf5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4f72518c-b64e-4741-9804-978e28f1fcf5",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "1ca2ff6c-d4c8-4533-9307-5e6b28c6f293",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "424ff6d6-b34f-4fb0-809f-91b23f480a39",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' 
"aliases": [' ' "947e7821-a017-4762-b3d7-6e9e9d2821cd"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "947e7821-a017-4762-b3d7-6e9e9d2821cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:14:39.783 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0 00:14:39.783 Malloc1p0 00:14:39.783 Malloc1p1 00:14:39.783 Malloc2p0 00:14:39.783 Malloc2p1 00:14:39.783 Malloc2p2 00:14:39.783 Malloc2p3 00:14:39.783 Malloc2p4 00:14:39.783 Malloc2p5 00:14:39.783 Malloc2p6 00:14:39.783 Malloc2p7 00:14:39.783 TestPT 00:14:39.783 raid0 00:14:39.783 concat0 ]] 00:14:39.783 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "3aabe122-b104-4b5b-a388-0f2490b95399"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3aabe122-b104-4b5b-a388-0f2490b95399",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "9ea473f3-895b-5836-9ec9-6d4d60e0ca46"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "9ea473f3-895b-5836-9ec9-6d4d60e0ca46",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": 
true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "bd854861-e64c-5c8b-a652-41bed28b9b77"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "bd854861-e64c-5c8b-a652-41bed28b9b77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "52886fca-d8ce-5303-9c2a-1cde970445ec"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "52886fca-d8ce-5303-9c2a-1cde970445ec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "411226be-cc31-5b22-b701-4b296299269f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "411226be-cc31-5b22-b701-4b296299269f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "c55a2a1c-3fb2-58e5-8650-f4eaa2db7c60"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c55a2a1c-3fb2-58e5-8650-f4eaa2db7c60",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' 
' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "66fefa64-1d62-5483-970c-63a822178b07"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "66fefa64-1d62-5483-970c-63a822178b07",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "8c6d1fa1-1a7e-5339-828b-1363428a081b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8c6d1fa1-1a7e-5339-828b-1363428a081b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "30f2a774-6596-5e45-8a56-201b5f6583af"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "30f2a774-6596-5e45-8a56-201b5f6583af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "1aa5f263-14fd-5940-adc8-9874421e8e4d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' 
"num_blocks": 8192,' ' "uuid": "1aa5f263-14fd-5940-adc8-9874421e8e4d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "85f1923f-8296-5445-8da5-26477e425ba6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "85f1923f-8296-5445-8da5-26477e425ba6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "3197c940-9229-5fc0-9aae-17e522d4bcdc"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3197c940-9229-5fc0-9aae-17e522d4bcdc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "2ed94ade-89da-442f-8f12-c5da833b6fa1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2ed94ade-89da-442f-8f12-c5da833b6fa1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' 
' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2ed94ade-89da-442f-8f12-c5da833b6fa1",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "a252f256-4482-4d83-ab6f-363be57fad05",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "2f812059-8bcb-403a-af6f-9b03c3887a90",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "6935b68c-805b-41e4-967f-05d9466dad63"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6935b68c-805b-41e4-967f-05d9466dad63",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "6935b68c-805b-41e4-967f-05d9466dad63",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "ad1855be-c83d-4373-9ef2-ee763c8ba7d3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "126eede7-79cf-4296-af37-b1d587b3ebb8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "4f72518c-b64e-4741-9804-978e28f1fcf5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4f72518c-b64e-4741-9804-978e28f1fcf5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' 
"nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4f72518c-b64e-4741-9804-978e28f1fcf5",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "1ca2ff6c-d4c8-4533-9307-5e6b28c6f293",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "424ff6d6-b34f-4fb0-809f-91b23f480a39",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "947e7821-a017-4762-b3d7-6e9e9d2821cd"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "947e7821-a017-4762-b3d7-6e9e9d2821cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]' 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]' 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]' 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]' 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]' 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]' 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:39.784 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]' 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]' 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]' 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]' 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]' 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]' 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:39.785 07:24:43 
blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]' 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]' 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.785 07:24:43 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:39.785 ************************************ 00:14:39.785 START TEST bdev_fio_trim 00:14:39.785 ************************************ 00:14:39.785 07:24:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:39.785 07:24:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:39.785 07:24:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:39.785 07:24:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:39.785 07:24:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:39.785 07:24:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:39.785 07:24:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # shift 00:14:39.785 07:24:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:39.785 07:24:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:39.785 07:24:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:39.785 07:24:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # grep libasan 00:14:39.785 07:24:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:39.785 07:24:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # 
asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:14:39.785 07:24:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1350 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:14:39.785 07:24:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1351 -- # break 00:14:39.785 07:24:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:39.785 07:24:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:40.044 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.044 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.044 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.045 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.045 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.045 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.045 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.045 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.045 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.045 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.045 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.045 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.045 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.045 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.045 fio-3.35 00:14:40.045 Starting 14 threads 00:14:52.371 00:14:52.371 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=71780: Wed Nov 20 07:24:55 2024 00:14:52.371 write: IOPS=106k, BW=415MiB/s (435MB/s)(4149MiB/10001msec); 0 zone resets 00:14:52.371 slat (usec): min=2, max=7090, avg=47.52, stdev=241.60 00:14:52.371 clat (usec): min=19, max=9539, avg=326.63, stdev=631.96 00:14:52.371 lat (usec): min=30, max=9573, avg=374.15, stdev=675.21 00:14:52.371 clat percentiles (usec): 00:14:52.371 | 50.000th=[ 225], 99.000th=[ 4228], 99.900th=[ 7177], 99.990th=[ 7373], 00:14:52.371 | 99.999th=[ 8225] 00:14:52.371 bw ( KiB/s): min=299664, max=559416, per=100.00%, avg=425504.00, stdev=6714.46, samples=266 00:14:52.371 iops : min=74916, max=139854, avg=106375.95, stdev=1678.62, samples=266 00:14:52.371 trim: IOPS=106k, 
BW=415MiB/s (435MB/s)(4149MiB/10001msec); 0 zone resets 00:14:52.371 slat (usec): min=3, max=9207, avg=32.06, stdev=197.42 00:14:52.371 clat (usec): min=4, max=9573, avg=370.79, stdev=672.67 00:14:52.371 lat (usec): min=12, max=9596, avg=402.84, stdev=700.06 00:14:52.372 clat percentiles (usec): 00:14:52.372 | 50.000th=[ 258], 99.000th=[ 4293], 99.900th=[ 7242], 99.990th=[ 7439], 00:14:52.372 | 99.999th=[ 8291] 00:14:52.372 bw ( KiB/s): min=299672, max=559408, per=100.00%, avg=425504.00, stdev=6714.43, samples=266 00:14:52.372 iops : min=74918, max=139852, avg=106375.84, stdev=1678.61, samples=266 00:14:52.372 lat (usec) : 10=0.01%, 20=0.01%, 50=0.33%, 100=3.20%, 250=49.70% 00:14:52.372 lat (usec) : 500=43.85%, 750=0.29%, 1000=0.03% 00:14:52.372 lat (msec) : 2=0.04%, 4=0.74%, 10=1.82% 00:14:52.372 cpu : usr=69.50%, sys=0.33%, ctx=158449, majf=0, minf=15649 00:14:52.372 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:52.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:52.372 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:52.372 issued rwts: total=0,1062079,1062080,0 short=0,0,0,0 dropped=0,0,0,0 00:14:52.372 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:52.372 00:14:52.372 Run status group 0 (all jobs): 00:14:52.372 WRITE: bw=415MiB/s (435MB/s), 415MiB/s-415MiB/s (435MB/s-435MB/s), io=4149MiB (4350MB), run=10001-10001msec 00:14:52.372 TRIM: bw=415MiB/s (435MB/s), 415MiB/s-415MiB/s (435MB/s-435MB/s), io=4149MiB (4350MB), run=10001-10001msec 00:14:54.273 ----------------------------------------------------- 00:14:54.273 Suppressions used: 00:14:54.273 count bytes template 00:14:54.273 14 129 /usr/src/fio/parse.c 00:14:54.273 1 904 libcrypto.so 00:14:54.273 ----------------------------------------------------- 00:14:54.273 00:14:54.273 00:14:54.273 real 0m14.388s 00:14:54.273 user 1m42.536s 00:14:54.273 sys 0m1.190s 00:14:54.273 07:24:57 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.273 07:24:57 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:14:54.273 ************************************ 00:14:54.273 END TEST bdev_fio_trim 00:14:54.273 ************************************ 00:14:54.273 07:24:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:14:54.273 07:24:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:54.273 /home/vagrant/spdk_repo/spdk 00:14:54.273 07:24:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:14:54.273 07:24:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:14:54.273 00:14:54.273 real 0m31.094s 00:14:54.273 user 3m35.142s 00:14:54.273 sys 0m5.133s 00:14:54.273 07:24:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.273 07:24:57 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:54.273 ************************************ 00:14:54.273 END TEST bdev_fio 00:14:54.273 ************************************ 00:14:54.273 07:24:57 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:54.273 07:24:57 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:54.273 07:24:57 
blockdev_general -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:14:54.273 07:24:57 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.273 07:24:57 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:54.273 ************************************ 00:14:54.273 START TEST bdev_verify 00:14:54.273 ************************************ 00:14:54.273 07:24:58 blockdev_general.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:54.273 [2024-11-20 07:24:58.072052] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:14:54.273 [2024-11-20 07:24:58.072171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71967 ] 00:14:54.534 [2024-11-20 07:24:58.234639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:54.534 [2024-11-20 07:24:58.365320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.534 [2024-11-20 07:24:58.365362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.102 [2024-11-20 07:24:58.837052] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:55.102 [2024-11-20 07:24:58.837117] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:55.102 [2024-11-20 07:24:58.845003] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:55.102 [2024-11-20 07:24:58.845045] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:55.102 [2024-11-20 07:24:58.852975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:55.102 [2024-11-20 07:24:58.853012] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:55.102 [2024-11-20 07:24:58.853023] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:55.360 [2024-11-20 07:24:59.073082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:55.360 [2024-11-20 07:24:59.073146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.360 [2024-11-20 07:24:59.073167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:14:55.360 [2024-11-20 07:24:59.073176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.360 [2024-11-20 07:24:59.075391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.360 [2024-11-20 07:24:59.075441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:55.618 Running I/O for 5 seconds... 
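
The verify stage above drives the full bdev stack from bdev.json (the Malloc*, TestPT, raid0, concat0, raid1 and AIO0 devices dumped earlier) through the bdevperf example app. As a minimal sketch of replaying that run by hand, copying the paths and flags exactly as they appear in the trace (only the SPDK variable is added for readability; the -C flag is carried over verbatim and not glossed here):

  #!/usr/bin/env bash
  # Sketch: replays the bdevperf verify run recorded in the trace above.
  #   -q 128    queue depth per job
  #   -o 4096   I/O size in bytes (4 KiB)
  #   -w verify write, read back and compare
  #   -t 5      run time in seconds
  #   -m 0x3    core mask (cores 0 and 1, matching the two reactors started above)
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3
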
00:15:00.677 36736.00 IOPS, 143.50 MiB/s [2024-11-20T07:25:04.869Z] 52032.00 IOPS, 203.25 MiB/s [2024-11-20T07:25:04.869Z] 47282.33 IOPS, 184.70 MiB/s 00:15:00.937 Latency(us) 00:15:00.937 [2024-11-20T07:25:04.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.937 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x0 length 0x1000 00:15:00.937 Malloc0 : 5.12 1574.78 6.15 0.00 0.00 81145.26 529.44 208799.41 00:15:00.937 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x1000 length 0x1000 00:15:00.937 Malloc0 : 5.07 1566.58 6.12 0.00 0.00 81570.48 533.02 320525.41 00:15:00.937 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x0 length 0x800 00:15:00.937 Malloc1p0 : 5.12 799.61 3.12 0.00 0.00 159389.13 3076.47 199641.54 00:15:00.937 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x800 length 0x800 00:15:00.937 Malloc1p0 : 5.07 808.27 3.16 0.00 0.00 157675.48 3062.16 184988.95 00:15:00.937 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x0 length 0x800 00:15:00.937 Malloc1p1 : 5.12 799.33 3.12 0.00 0.00 159057.28 3133.71 196894.18 00:15:00.937 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x800 length 0x800 00:15:00.937 Malloc1p1 : 5.07 807.96 3.16 0.00 0.00 157358.49 3148.02 181325.81 00:15:00.937 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x0 length 0x200 00:15:00.937 Malloc2p0 : 5.13 799.06 3.12 0.00 0.00 158716.95 3233.87 190483.68 00:15:00.937 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x200 length 0x200 00:15:00.937 Malloc2p0 : 5.18 815.58 3.19 0.00 0.00 155529.36 3205.25 178578.45 00:15:00.937 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x0 length 0x200 00:15:00.937 Malloc2p1 : 5.13 798.78 3.12 0.00 0.00 158372.37 3105.09 185904.74 00:15:00.937 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x200 length 0x200 00:15:00.937 Malloc2p1 : 5.18 815.29 3.18 0.00 0.00 155203.56 3090.78 173999.51 00:15:00.937 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x0 length 0x200 00:15:00.937 Malloc2p2 : 5.13 798.51 3.12 0.00 0.00 158042.78 3090.78 181325.81 00:15:00.937 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x200 length 0x200 00:15:00.937 Malloc2p2 : 5.18 814.95 3.18 0.00 0.00 154885.42 3076.47 168504.79 00:15:00.937 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x0 length 0x200 00:15:00.937 Malloc2p3 : 5.13 798.24 3.12 0.00 0.00 157707.32 3090.78 177662.66 00:15:00.937 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x200 length 0x200 00:15:00.937 Malloc2p3 : 5.19 814.64 3.18 0.00 0.00 154565.27 3105.09 163925.86 00:15:00.937 Job: Malloc2p4 (Core Mask 0x1, workload: 
verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x0 length 0x200 00:15:00.937 Malloc2p4 : 5.13 797.97 3.12 0.00 0.00 157379.03 3019.23 173083.72 00:15:00.937 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x200 length 0x200 00:15:00.937 Malloc2p4 : 5.19 814.32 3.18 0.00 0.00 154244.46 3004.93 160262.71 00:15:00.937 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x0 length 0x200 00:15:00.937 Malloc2p5 : 5.20 812.32 3.17 0.00 0.00 154264.45 3090.78 169420.58 00:15:00.937 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x200 length 0x200 00:15:00.937 Malloc2p5 : 5.19 813.95 3.18 0.00 0.00 153935.33 3062.16 155683.77 00:15:00.937 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x0 length 0x200 00:15:00.937 Malloc2p6 : 5.20 812.04 3.17 0.00 0.00 153945.46 3105.09 164841.64 00:15:00.937 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x200 length 0x200 00:15:00.937 Malloc2p6 : 5.19 813.65 3.18 0.00 0.00 153629.65 3133.71 151104.84 00:15:00.937 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x0 length 0x200 00:15:00.937 Malloc2p7 : 5.20 811.78 3.17 0.00 0.00 153614.22 3233.87 160262.71 00:15:00.937 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x200 length 0x200 00:15:00.937 Malloc2p7 : 5.19 813.36 3.18 0.00 0.00 153294.37 3219.56 146525.90 00:15:00.937 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x0 length 0x1000 00:15:00.937 TestPT : 5.20 789.56 3.08 0.00 0.00 157028.76 14652.59 160262.71 00:15:00.937 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x1000 length 0x1000 00:15:00.937 TestPT : 5.22 787.49 3.08 0.00 0.00 157246.62 16827.58 214294.13 00:15:00.937 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x0 length 0x2000 00:15:00.937 raid0 : 5.21 811.38 3.17 0.00 0.00 152752.98 3348.35 141031.18 00:15:00.937 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x2000 length 0x2000 00:15:00.937 raid0 : 5.20 812.83 3.18 0.00 0.00 152443.45 3376.96 125462.81 00:15:00.937 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x0 length 0x2000 00:15:00.937 concat0 : 5.21 811.12 3.17 0.00 0.00 152464.67 3376.96 138283.82 00:15:00.937 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x2000 length 0x2000 00:15:00.937 concat0 : 5.23 832.27 3.25 0.00 0.00 148599.21 3362.66 120883.87 00:15:00.937 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x0 length 0x1000 00:15:00.937 raid1 : 5.21 810.81 3.17 0.00 0.00 152136.52 4235.51 132789.10 00:15:00.937 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x1000 length 0x1000 00:15:00.937 raid1 : 5.23 832.00 3.25 0.00 
0.00 148281.84 4149.66 121799.66 00:15:00.937 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x0 length 0x4e2 00:15:00.937 AIO0 : 5.23 832.45 3.25 0.00 0.00 147744.27 2747.36 127294.38 00:15:00.937 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:00.937 Verification LBA range: start 0x4e2 length 0x4e2 00:15:00.937 AIO0 : 5.23 831.62 3.25 0.00 0.00 147938.87 1602.63 127294.38 00:15:00.937 [2024-11-20T07:25:04.870Z] =================================================================================================================== 00:15:00.937 [2024-11-20T07:25:04.870Z] Total : 27452.48 107.24 0.00 0.00 146265.39 529.44 320525.41 00:15:03.468 00:15:03.468 real 0m9.263s 00:15:03.468 user 0m17.089s 00:15:03.468 sys 0m0.569s 00:15:03.468 07:25:07 blockdev_general.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:03.468 07:25:07 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:03.468 ************************************ 00:15:03.468 END TEST bdev_verify 00:15:03.468 ************************************ 00:15:03.468 07:25:07 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:03.468 07:25:07 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:03.468 07:25:07 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.468 07:25:07 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:03.468 ************************************ 00:15:03.468 START TEST bdev_verify_big_io 00:15:03.468 ************************************ 00:15:03.468 07:25:07 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:03.728 [2024-11-20 07:25:07.397993] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:15:03.728 [2024-11-20 07:25:07.398115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72084 ] 00:15:03.728 [2024-11-20 07:25:07.574476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:03.990 [2024-11-20 07:25:07.711664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.990 [2024-11-20 07:25:07.711740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.556 [2024-11-20 07:25:08.180827] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:04.556 [2024-11-20 07:25:08.180894] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:04.556 [2024-11-20 07:25:08.188791] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:04.556 [2024-11-20 07:25:08.188842] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:04.556 [2024-11-20 07:25:08.196772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:04.556 [2024-11-20 07:25:08.196812] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:15:04.556 [2024-11-20 07:25:08.196823] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:15:04.556 [2024-11-20 07:25:08.416656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:04.556 [2024-11-20 07:25:08.416725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.556 [2024-11-20 07:25:08.416744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:15:04.556 [2024-11-20 07:25:08.416754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.556 [2024-11-20 07:25:08.418914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.556 [2024-11-20 07:25:08.418963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:15:05.125 [2024-11-20 07:25:08.833902] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:15:05.125 [2024-11-20 07:25:08.838258] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:15:05.125 [2024-11-20 07:25:08.842611] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:15:05.126 [2024-11-20 07:25:08.846488] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:15:05.126 [2024-11-20 07:25:08.851167] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:15:05.126 [2024-11-20 07:25:08.855121] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:15:05.126 [2024-11-20 07:25:08.859462] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:15:05.126 [2024-11-20 07:25:08.863415] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:15:05.126 [2024-11-20 07:25:08.867650] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:15:05.126 [2024-11-20 07:25:08.871387] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:15:05.126 [2024-11-20 07:25:08.875847] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:15:05.126 [2024-11-20 07:25:08.879874] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:15:05.126 [2024-11-20 07:25:08.884107] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:15:05.126 [2024-11-20 07:25:08.888571] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:15:05.126 [2024-11-20 07:25:08.892858] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:15:05.126 [2024-11-20 07:25:08.897111] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:15:05.126 [2024-11-20 07:25:08.995040] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:15:05.126 [2024-11-20 07:25:09.002727] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:15:05.126 Running I/O for 5 seconds... 00:15:11.695 4412.00 IOPS, 275.75 MiB/s 00:15:11.695 Latency(us) 00:15:11.695 [2024-11-20T07:25:15.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.695 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.695 Verification LBA range: start 0x0 length 0x100 00:15:11.695 Malloc0 : 5.38 332.93 20.81 0.00 0.00 379224.17 679.69 1106270.57 00:15:11.695 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.695 Verification LBA range: start 0x100 length 0x100 00:15:11.695 Malloc0 : 5.67 293.23 18.33 0.00 0.00 431053.08 654.64 1311406.84 00:15:11.695 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.695 Verification LBA range: start 0x0 length 0x80 00:15:11.695 Malloc1p0 : 6.04 55.61 3.48 0.00 0.00 2124933.03 1709.95 3311485.43 00:15:11.695 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.695 Verification LBA range: start 0x80 length 0x80 00:15:11.695 Malloc1p0 : 5.79 156.95 9.81 0.00 0.00 778977.55 2775.98 1560500.88 00:15:11.695 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.695 Verification LBA range: start 0x0 length 0x80 00:15:11.695 Malloc1p1 : 6.09 57.82 3.61 0.00 0.00 2017980.72 1266.36 3208917.30 00:15:11.695 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.695 Verification LBA range: start 0x80 length 0x80 00:15:11.695 Malloc1p1 : 6.01 55.88 3.49 0.00 0.00 2113714.92 1302.13 3355443.20 00:15:11.695 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:11.695 Verification LBA range: start 0x0 length 0x20 00:15:11.695 Malloc2p0 : 5.70 44.91 2.81 0.00 0.00 658155.20 550.90 1208838.71 00:15:11.695 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:11.695 Verification LBA range: start 0x20 length 0x20 00:15:11.695 Malloc2p0 : 5.73 41.88 2.62 0.00 0.00 711457.38 547.33 1142902.05 00:15:11.695 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:11.695 Verification LBA range: start 0x0 length 0x20 00:15:11.695 Malloc2p1 : 5.70 44.90 2.81 0.00 0.00 654795.33 533.02 1194186.12 00:15:11.695 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:11.695 Verification LBA range: start 0x20 length 0x20 00:15:11.695 Malloc2p1 : 5.73 41.87 2.62 0.00 0.00 707536.06 536.59 1128249.46 00:15:11.695 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:11.695 Verification LBA range: start 0x0 length 0x20 00:15:11.695 Malloc2p2 : 5.70 44.89 2.81 0.00 0.00 651389.91 554.48 1179533.53 00:15:11.695 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:11.695 Verification LBA range: start 0x20 length 0x20 00:15:11.695 Malloc2p2 : 5.73 41.87 2.62 0.00 0.00 703859.82 550.90 1113596.87 
00:15:11.695 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:11.695 Verification LBA range: start 0x0 length 0x20 00:15:11.695 Malloc2p3 : 5.75 47.31 2.96 0.00 0.00 619645.24 568.79 1157554.64 00:15:11.695 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:11.695 Verification LBA range: start 0x20 length 0x20 00:15:11.695 Malloc2p3 : 5.73 41.86 2.62 0.00 0.00 700187.08 543.75 1098944.28 00:15:11.695 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:11.695 Verification LBA range: start 0x0 length 0x20 00:15:11.695 Malloc2p4 : 5.75 47.30 2.96 0.00 0.00 616224.25 572.37 1142902.05 00:15:11.695 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:11.695 Verification LBA range: start 0x20 length 0x20 00:15:11.695 Malloc2p4 : 5.73 41.85 2.62 0.00 0.00 696756.69 550.90 1084291.69 00:15:11.695 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:11.695 Verification LBA range: start 0x0 length 0x20 00:15:11.695 Malloc2p5 : 5.75 47.29 2.96 0.00 0.00 612917.93 568.79 1128249.46 00:15:11.695 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:11.695 Verification LBA range: start 0x20 length 0x20 00:15:11.695 Malloc2p5 : 5.74 41.84 2.62 0.00 0.00 693220.06 543.75 1069639.10 00:15:11.696 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:11.696 Verification LBA range: start 0x0 length 0x20 00:15:11.696 Malloc2p6 : 5.75 47.28 2.95 0.00 0.00 609819.13 561.63 1113596.87 00:15:11.696 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:11.696 Verification LBA range: start 0x20 length 0x20 00:15:11.696 Malloc2p6 : 5.74 41.83 2.61 0.00 0.00 689495.47 550.90 1054986.51 00:15:11.696 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:11.696 Verification LBA range: start 0x0 length 0x20 00:15:11.696 Malloc2p7 : 5.75 47.27 2.95 0.00 0.00 606597.09 575.94 1098944.28 00:15:11.696 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:11.696 Verification LBA range: start 0x20 length 0x20 00:15:11.696 Malloc2p7 : 5.74 41.82 2.61 0.00 0.00 686070.87 579.52 1040333.92 00:15:11.696 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.696 Verification LBA range: start 0x0 length 0x100 00:15:11.696 TestPT : 6.12 60.16 3.76 0.00 0.00 1834318.02 1273.52 2989128.44 00:15:11.696 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.696 Verification LBA range: start 0x100 length 0x100 00:15:11.696 TestPT : 6.01 51.21 3.20 0.00 0.00 2165841.96 70057.70 2959823.26 00:15:11.696 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.696 Verification LBA range: start 0x0 length 0x200 00:15:11.696 raid0 : 6.11 62.82 3.93 0.00 0.00 1734405.99 1395.14 2886560.31 00:15:11.696 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.696 Verification LBA range: start 0x200 length 0x200 00:15:11.696 raid0 : 5.87 63.72 3.98 0.00 0.00 1739769.14 1352.22 3047738.80 00:15:11.696 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.696 Verification LBA range: start 0x0 length 0x200 00:15:11.696 concat0 : 6.05 80.67 5.04 0.00 0.00 1344212.80 1337.91 2783992.17 00:15:11.696 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.696 Verification LBA range: start 0x200 length 0x200 
00:15:11.696 concat0 : 6.05 66.10 4.13 0.00 0.00 1629074.15 1345.06 2945170.67 00:15:11.696 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.696 Verification LBA range: start 0x0 length 0x100 00:15:11.696 raid1 : 6.09 89.27 5.58 0.00 0.00 1193780.61 1802.96 2681424.04 00:15:11.696 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.696 Verification LBA range: start 0x100 length 0x100 00:15:11.696 raid1 : 6.05 81.93 5.12 0.00 0.00 1303462.42 1681.33 2857255.13 00:15:11.696 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:15:11.696 Verification LBA range: start 0x0 length 0x4e 00:15:11.696 AIO0 : 6.11 84.06 5.25 0.00 0.00 760899.41 1373.68 1619111.24 00:15:11.696 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:15:11.696 Verification LBA range: start 0x4e length 0x4e 00:15:11.696 AIO0 : 6.09 80.79 5.05 0.00 0.00 794833.35 869.28 1670395.30 00:15:11.696 [2024-11-20T07:25:15.629Z] =================================================================================================================== 00:15:11.696 [2024-11-20T07:25:15.629Z] Total : 2379.10 148.69 0.00 0.00 938932.84 533.02 3355443.20 00:15:14.228 00:15:14.228 real 0m10.645s 00:15:14.228 user 0m19.933s 00:15:14.228 sys 0m0.520s 00:15:14.228 07:25:17 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.228 07:25:17 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.228 ************************************ 00:15:14.228 END TEST bdev_verify_big_io 00:15:14.228 ************************************ 00:15:14.228 07:25:18 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:14.228 07:25:18 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:14.228 07:25:18 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.228 07:25:18 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:14.228 ************************************ 00:15:14.228 START TEST bdev_write_zeroes 00:15:14.228 ************************************ 00:15:14.228 07:25:18 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:14.228 [2024-11-20 07:25:18.116974] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:15:14.228 [2024-11-20 07:25:18.117127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72215 ] 00:15:14.488 [2024-11-20 07:25:18.302605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.747 [2024-11-20 07:25:18.442403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.005 [2024-11-20 07:25:18.911783] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:15.005 [2024-11-20 07:25:18.911847] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:15.005 [2024-11-20 07:25:18.919727] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:15.005 [2024-11-20 07:25:18.919775] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:15.005 [2024-11-20 07:25:18.927717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:15.005 [2024-11-20 07:25:18.927754] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:15:15.005 [2024-11-20 07:25:18.927769] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:15:15.265 [2024-11-20 07:25:19.146941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:15.265 [2024-11-20 07:25:19.147016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.265 [2024-11-20 07:25:19.147036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:15:15.265 [2024-11-20 07:25:19.147046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.265 [2024-11-20 07:25:19.149122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.265 [2024-11-20 07:25:19.149168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:15:15.870 Running I/O for 1 seconds... 
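
The table that follows reports both IOPS and MiB/s per bdev; at the 4 KiB I/O size used by this write_zeroes run the two figures are tied together by a factor of 4096 / 2^20. A quick sanity check against the aggregate printed below (a sketch; the 100335 value is simply read off the log):

  # IOPS -> MiB/s at a 4 KiB I/O size: iops * 4096 / 1048576
  awk 'BEGIN { iops = 100335; printf "%.2f MiB/s\n", iops * 4096 / 1048576 }'
  # prints 391.93 MiB/s, matching the aggregate throughput reported below
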
00:15:16.809 100335.00 IOPS, 391.93 MiB/s 00:15:16.809 Latency(us) 00:15:16.809 [2024-11-20T07:25:20.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.809 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.809 Malloc0 : 1.05 6221.46 24.30 0.00 0.00 20561.58 522.28 43499.88 00:15:16.809 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.809 Malloc1p0 : 1.05 6213.56 24.27 0.00 0.00 20561.88 715.46 42584.09 00:15:16.809 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.809 Malloc1p1 : 1.05 6206.57 24.24 0.00 0.00 20549.38 686.84 41210.41 00:15:16.809 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.809 Malloc2p0 : 1.05 6199.78 24.22 0.00 0.00 20538.29 679.69 39836.73 00:15:16.809 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.809 Malloc2p1 : 1.05 6192.94 24.19 0.00 0.00 20532.86 751.23 38463.05 00:15:16.809 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.809 Malloc2p2 : 1.06 6186.15 24.16 0.00 0.00 20517.38 701.15 37089.37 00:15:16.809 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.809 Malloc2p3 : 1.06 6179.66 24.14 0.00 0.00 20506.12 715.46 35715.69 00:15:16.809 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.809 Malloc2p4 : 1.06 6172.81 24.11 0.00 0.00 20494.22 686.84 34113.06 00:15:16.809 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.809 Malloc2p5 : 1.06 6165.65 24.08 0.00 0.00 20484.15 686.84 34799.90 00:15:16.809 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.809 Malloc2p6 : 1.06 6159.19 24.06 0.00 0.00 20469.68 726.19 36173.58 00:15:16.809 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.809 Malloc2p7 : 1.06 6152.59 24.03 0.00 0.00 20458.76 693.99 37547.26 00:15:16.809 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.809 TestPT : 1.06 6146.07 24.01 0.00 0.00 20445.29 719.04 39149.89 00:15:16.809 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.809 raid0 : 1.06 6138.70 23.98 0.00 0.00 20429.62 1266.36 40523.57 00:15:16.809 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.809 concat0 : 1.06 6131.55 23.95 0.00 0.00 20390.29 1280.67 42126.20 00:15:16.809 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.809 raid1 : 1.07 6121.92 23.91 0.00 0.00 20355.83 2074.83 43728.82 00:15:16.809 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.809 AIO0 : 1.07 6100.61 23.83 0.00 0.00 20349.51 944.41 45102.50 00:15:16.809 [2024-11-20T07:25:20.742Z] =================================================================================================================== 00:15:16.809 [2024-11-20T07:25:20.742Z] Total : 98689.21 385.50 0.00 0.00 20477.82 522.28 45102.50 00:15:19.345 00:15:19.345 real 0m4.955s 00:15:19.345 user 0m4.371s 00:15:19.345 sys 0m0.434s 00:15:19.345 07:25:23 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:19.345 07:25:23 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:19.345 ************************************ 00:15:19.345 END 
TEST bdev_write_zeroes 00:15:19.345 ************************************ 00:15:19.345 07:25:23 blockdev_general -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:19.345 07:25:23 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:19.345 07:25:23 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:19.345 07:25:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:19.345 ************************************ 00:15:19.345 START TEST bdev_json_nonenclosed 00:15:19.345 ************************************ 00:15:19.345 07:25:23 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:19.345 [2024-11-20 07:25:23.137600] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:15:19.345 [2024-11-20 07:25:23.137734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72285 ] 00:15:19.604 [2024-11-20 07:25:23.314586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.604 [2024-11-20 07:25:23.445776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.604 [2024-11-20 07:25:23.445876] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:19.604 [2024-11-20 07:25:23.445896] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:19.604 [2024-11-20 07:25:23.445907] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:19.864 00:15:19.864 real 0m0.655s 00:15:19.864 user 0m0.441s 00:15:19.864 sys 0m0.113s 00:15:19.864 07:25:23 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:19.864 07:25:23 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:19.864 ************************************ 00:15:19.864 END TEST bdev_json_nonenclosed 00:15:19.864 ************************************ 00:15:20.123 07:25:23 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:20.123 07:25:23 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:20.123 07:25:23 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:20.123 07:25:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:20.123 ************************************ 00:15:20.123 START TEST bdev_json_nonarray 00:15:20.123 ************************************ 00:15:20.123 07:25:23 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:20.123 [2024-11-20 07:25:23.869770] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:15:20.123 [2024-11-20 07:25:23.869899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72316 ] 00:15:20.123 [2024-11-20 07:25:24.047780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.383 [2024-11-20 07:25:24.177266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.383 [2024-11-20 07:25:24.177365] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:15:20.383 [2024-11-20 07:25:24.177390] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:20.383 [2024-11-20 07:25:24.177400] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:20.641 00:15:20.641 real 0m0.654s 00:15:20.641 user 0m0.425s 00:15:20.641 sys 0m0.128s 00:15:20.641 07:25:24 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:20.641 07:25:24 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:20.641 ************************************ 00:15:20.641 END TEST bdev_json_nonarray 00:15:20.641 ************************************ 00:15:20.641 07:25:24 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]] 00:15:20.641 07:25:24 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite '' 00:15:20.641 07:25:24 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:20.641 07:25:24 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:20.641 07:25:24 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:20.641 ************************************ 00:15:20.641 START TEST bdev_qos 00:15:20.641 ************************************ 00:15:20.641 07:25:24 blockdev_general.bdev_qos -- common/autotest_common.sh@1129 -- # qos_test_suite '' 00:15:20.641 07:25:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:15:20.641 07:25:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=72347 00:15:20.641 Process qos testing pid: 72347 00:15:20.641 07:25:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 72347' 00:15:20.641 07:25:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:15:20.641 07:25:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 72347 00:15:20.641 07:25:24 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # '[' -z 72347 ']' 00:15:20.641 07:25:24 blockdev_general.bdev_qos -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.642 07:25:24 blockdev_general.bdev_qos -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:20.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.642 07:25:24 blockdev_general.bdev_qos -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
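
The QoS bdevperf instance above is started with -z, so it comes up idle and waits to be configured and driven over JSON-RPC. The fixture then provisions its two test bdevs with rpc_cmd, as the trace below shows. Outside the harness the equivalent calls could be issued directly with rpc.py against the default socket; a sketch, with the sizes (128 MiB, 512-byte blocks, i.e. the 262144 blocks seen in the bdev dumps below) taken from the trace:

  # Sketch: standalone equivalents of the rpc_cmd calls traced below.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/scripts/rpc.py" bdev_malloc_create -b Malloc_0 128 512   # 128 MiB malloc bdev, 512 B blocks
  "$SPDK/scripts/rpc.py" bdev_null_create Null_1 128 512          # 128 MiB null bdev, 512 B blocks
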
00:15:20.642 07:25:24 blockdev_general.bdev_qos -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:20.642 07:25:24 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:20.900 [2024-11-20 07:25:24.590310] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:15:20.900 [2024-11-20 07:25:24.590477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72347 ] 00:15:20.900 [2024-11-20 07:25:24.758710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.158 [2024-11-20 07:25:24.893395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@868 -- # return 0 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:21.727 Malloc_0 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_0 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # local i 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:21.727 [ 00:15:21.727 { 00:15:21.727 "name": "Malloc_0", 00:15:21.727 "aliases": [ 00:15:21.727 "970983ec-4c42-43ea-addf-81435f8fbcc2" 00:15:21.727 ], 00:15:21.727 "product_name": "Malloc disk", 00:15:21.727 "block_size": 512, 00:15:21.727 "num_blocks": 262144, 00:15:21.727 "uuid": "970983ec-4c42-43ea-addf-81435f8fbcc2", 00:15:21.727 "assigned_rate_limits": { 00:15:21.727 "rw_ios_per_sec": 0, 00:15:21.727 "rw_mbytes_per_sec": 0, 00:15:21.727 "r_mbytes_per_sec": 0, 00:15:21.727 "w_mbytes_per_sec": 0 00:15:21.727 }, 00:15:21.727 "claimed": false, 00:15:21.727 "zoned": false, 00:15:21.727 "supported_io_types": { 00:15:21.727 "read": true, 00:15:21.727 "write": true, 00:15:21.727 "unmap": true, 00:15:21.727 "flush": true, 00:15:21.727 
"reset": true, 00:15:21.727 "nvme_admin": false, 00:15:21.727 "nvme_io": false, 00:15:21.727 "nvme_io_md": false, 00:15:21.727 "write_zeroes": true, 00:15:21.727 "zcopy": true, 00:15:21.727 "get_zone_info": false, 00:15:21.727 "zone_management": false, 00:15:21.727 "zone_append": false, 00:15:21.727 "compare": false, 00:15:21.727 "compare_and_write": false, 00:15:21.727 "abort": true, 00:15:21.727 "seek_hole": false, 00:15:21.727 "seek_data": false, 00:15:21.727 "copy": true, 00:15:21.727 "nvme_iov_md": false 00:15:21.727 }, 00:15:21.727 "memory_domains": [ 00:15:21.727 { 00:15:21.727 "dma_device_id": "system", 00:15:21.727 "dma_device_type": 1 00:15:21.727 }, 00:15:21.727 { 00:15:21.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.727 "dma_device_type": 2 00:15:21.727 } 00:15:21.727 ], 00:15:21.727 "driver_specific": {} 00:15:21.727 } 00:15:21.727 ] 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@911 -- # return 0 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.727 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:21.988 Null_1 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # local bdev_name=Null_1 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # local i 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:21.988 [ 00:15:21.988 { 00:15:21.988 "name": "Null_1", 00:15:21.988 "aliases": [ 00:15:21.988 "3fa3aa03-4779-495f-854a-4c8dc961cfee" 00:15:21.988 ], 00:15:21.988 "product_name": "Null disk", 00:15:21.988 "block_size": 512, 00:15:21.988 "num_blocks": 262144, 00:15:21.988 "uuid": "3fa3aa03-4779-495f-854a-4c8dc961cfee", 00:15:21.988 "assigned_rate_limits": { 00:15:21.988 "rw_ios_per_sec": 0, 00:15:21.988 "rw_mbytes_per_sec": 0, 00:15:21.988 "r_mbytes_per_sec": 0, 00:15:21.988 "w_mbytes_per_sec": 0 00:15:21.988 }, 00:15:21.988 "claimed": false, 00:15:21.988 "zoned": false, 00:15:21.988 "supported_io_types": { 00:15:21.988 "read": true, 00:15:21.988 "write": true, 00:15:21.988 "unmap": false, 00:15:21.988 "flush": 
false, 00:15:21.988 "reset": true, 00:15:21.988 "nvme_admin": false, 00:15:21.988 "nvme_io": false, 00:15:21.988 "nvme_io_md": false, 00:15:21.988 "write_zeroes": true, 00:15:21.988 "zcopy": false, 00:15:21.988 "get_zone_info": false, 00:15:21.988 "zone_management": false, 00:15:21.988 "zone_append": false, 00:15:21.988 "compare": false, 00:15:21.988 "compare_and_write": false, 00:15:21.988 "abort": true, 00:15:21.988 "seek_hole": false, 00:15:21.988 "seek_data": false, 00:15:21.988 "copy": false, 00:15:21.988 "nvme_iov_md": false 00:15:21.988 }, 00:15:21.988 "driver_specific": {} 00:15:21.988 } 00:15:21.988 ] 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- common/autotest_common.sh@911 -- # return 0 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:21.988 07:25:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:15:21.988 Running I/O for 60 seconds... 
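For readers following the trace, the setup above reduces to a short RPC sequence: a 128 MiB malloc bdev (Malloc_0, 512-byte blocks) and a null bdev (Null_1) are created inside the already-running bdevperf process, the queued randread job is started, and the unthrottled IOPS for Malloc_0 is read from the last iostat.py sample. A minimal standalone sketch of the same flow is below; the repository-relative script paths and the default RPC socket are assumptions, not values taken from this run.

    # assumes a bdevperf app already running in RPC-server mode (-z) on the default socket
    ./scripts/rpc.py bdev_malloc_create -b Malloc_0 128 512   # 128 MiB malloc bdev, 512-byte blocks
    ./scripts/rpc.py bdev_null_create Null_1 128 512          # null bdev used later for the bandwidth caps
    ./examples/bdev/bdevperf/bdevperf.py perform_tests &      # start the queued randread job in the background
    # unthrottled IOPS for Malloc_0: second column of the last iostat.py sample
    ./scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print $2}'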
00:15:23.893 165888.00 IOPS, 648.00 MiB/s [2024-11-20T07:25:29.206Z] 165376.00 IOPS, 646.00 MiB/s [2024-11-20T07:25:30.141Z] 162986.67 IOPS, 636.67 MiB/s [2024-11-20T07:25:31.076Z] 162432.00 IOPS, 634.50 MiB/s [2024-11-20T07:25:31.076Z] 162713.60 IOPS, 635.60 MiB/s [2024-11-20T07:25:31.076Z] 07:25:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 80329.38 321317.53 0.00 0.00 325632.00 0.00 0.00 ' 00:15:27.143 07:25:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:15:27.143 07:25:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:15:27.143 07:25:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=80329.38 00:15:27.143 07:25:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 80329 00:15:27.143 07:25:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=80329 00:15:27.143 07:25:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=20000 00:15:27.143 07:25:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 20000 -gt 1000 ']' 00:15:27.143 07:25:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 20000 Malloc_0 00:15:27.143 07:25:30 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.143 07:25:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:27.143 07:25:30 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.143 07:25:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 20000 IOPS Malloc_0 00:15:27.143 07:25:30 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:27.143 07:25:30 blockdev_general.bdev_qos -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.143 07:25:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:27.143 ************************************ 00:15:27.143 START TEST bdev_qos_iops 00:15:27.143 ************************************ 00:15:27.143 07:25:30 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1129 -- # run_qos_test 20000 IOPS Malloc_0 00:15:27.143 07:25:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=20000 00:15:27.143 07:25:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0 00:15:27.143 07:25:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0 00:15:27.143 07:25:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:15:27.143 07:25:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:15:27.143 07:25:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result 00:15:27.143 07:25:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1 00:15:27.143 07:25:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:27.144 07:25:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:15:29.012 146740.00 IOPS, 573.20 MiB/s [2024-11-20T07:25:33.883Z] 133245.14 IOPS, 520.49 MiB/s [2024-11-20T07:25:34.822Z] 123230.00 IOPS, 481.37 MiB/s [2024-11-20T07:25:36.200Z] 115184.44 IOPS, 449.94 MiB/s [2024-11-20T07:25:36.200Z] 108754.00 IOPS, 424.82 MiB/s 
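The derivation above is the interesting part: the unthrottled result of 80329 IOPS is turned into a 20000 IOPS cap (roughly a quarter of the measured rate, rounded down to a whole thousand) and applied to Malloc_0 before the bdev_qos_iops sub-test re-measures the device. Expressed as standalone commands, with the same path assumptions as above and the rounding written out as one plausible reading of the trace rather than the harness's exact expression:

    io_result=80329                                  # unthrottled IOPS measured above
    iops_limit=$(( io_result / 4 / 1000 * 1000 ))    # 20000; reproduces the step seen in the trace
    ./scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec "$iops_limit" Malloc_0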
[2024-11-20T07:25:36.200Z] 07:25:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 20003.30 80013.20 0.00 0.00 81120.00 0.00 0.00 ' 00:15:32.267 07:25:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:15:32.267 07:25:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:15:32.267 07:25:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=20003.30 00:15:32.267 07:25:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@384 -- # echo 20003 00:15:32.267 07:25:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=20003 00:15:32.267 07:25:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']' 00:15:32.267 07:25:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=18000 00:15:32.267 07:25:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=22000 00:15:32.267 07:25:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 20003 -lt 18000 ']' 00:15:32.267 07:25:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 20003 -gt 22000 ']' 00:15:32.267 00:15:32.267 real 0m5.226s 00:15:32.267 user 0m0.131s 00:15:32.267 sys 0m0.045s 00:15:32.267 07:25:36 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.267 07:25:36 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:15:32.267 ************************************ 00:15:32.267 END TEST bdev_qos_iops 00:15:32.267 ************************************ 00:15:32.525 07:25:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1 00:15:32.525 07:25:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:15:32.525 07:25:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:15:32.525 07:25:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:15:32.525 07:25:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:32.525 07:25:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1 00:15:32.525 07:25:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:15:34.046 103467.64 IOPS, 404.17 MiB/s [2024-11-20T07:25:38.914Z] 99077.00 IOPS, 387.02 MiB/s [2024-11-20T07:25:39.861Z] 95654.15 IOPS, 373.65 MiB/s [2024-11-20T07:25:41.292Z] 92342.00 IOPS, 360.71 MiB/s [2024-11-20T07:25:41.551Z] 89514.67 IOPS, 349.67 MiB/s [2024-11-20T07:25:41.551Z] 07:25:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 29224.17 116896.67 0.00 0.00 118784.00 0.00 0.00 ' 00:15:37.618 07:25:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:15:37.618 07:25:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:37.618 07:25:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:15:37.618 07:25:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=118784.00 00:15:37.618 07:25:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 118784 00:15:37.618 07:25:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=118784 00:15:37.618 07:25:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=11 
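The acceptance rule visible above is a plain ±10% window: with a 20000 IOPS cap in place, the re-measured 20003 IOPS must fall between 18000 and 22000 for bdev_qos_iops to pass. The same check is about to be repeated for bandwidth on Null_1, where the measured 118784 kB/s is cut down to an 11 MB/s cap (roughly a tenth of the unthrottled bandwidth). A hedged sketch of the check itself, with illustrative variable names rather than the harness's:

    qos_limit=20000                          # requested IOPS cap
    qos_result=20003                         # throttled rate reported by iostat.py
    lower_limit=$(( qos_limit * 9 / 10 ))    # 18000
    upper_limit=$(( qos_limit * 11 / 10 ))   # 22000
    if [ "$qos_result" -lt "$lower_limit" ] || [ "$qos_result" -gt "$upper_limit" ]; then
        echo "throttled result $qos_result outside ${lower_limit}-${upper_limit}" >&2
        exit 1
    fi

For the bandwidth variants the cap is requested in MB/s but iostat.py reports kB/s, so the window is computed on the cap times 1024; that is where the 11264, 10137 and 12390 figures a little further down come from.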
00:15:37.618 07:25:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 11 -lt 2 ']' 00:15:37.618 07:25:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 11 Null_1 00:15:37.618 07:25:41 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.618 07:25:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:37.618 07:25:41 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.618 07:25:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 11 BANDWIDTH Null_1 00:15:37.618 07:25:41 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:37.618 07:25:41 blockdev_general.bdev_qos -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.618 07:25:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:37.618 ************************************ 00:15:37.618 START TEST bdev_qos_bw 00:15:37.618 ************************************ 00:15:37.618 07:25:41 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1129 -- # run_qos_test 11 BANDWIDTH Null_1 00:15:37.618 07:25:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=11 00:15:37.618 07:25:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:15:37.618 07:25:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1 00:15:37.618 07:25:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:15:37.618 07:25:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:15:37.618 07:25:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:15:37.618 07:25:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:37.618 07:25:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1 00:15:37.618 07:25:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1 00:15:39.120 86401.38 IOPS, 337.51 MiB/s [2024-11-20T07:25:43.991Z] 82666.59 IOPS, 322.92 MiB/s [2024-11-20T07:25:44.956Z] 79343.94 IOPS, 309.94 MiB/s [2024-11-20T07:25:45.896Z] 76366.32 IOPS, 298.31 MiB/s [2024-11-20T07:25:46.834Z] 73696.85 IOPS, 287.88 MiB/s [2024-11-20T07:25:46.834Z] 07:25:46 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 2811.61 11246.43 0.00 0.00 11444.00 0.00 0.00 ' 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=11444.00 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 11444 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=11444 00:15:42.901 ************************************ 00:15:42.901 END TEST bdev_qos_bw 00:15:42.901 ************************************ 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' 
BANDWIDTH = BANDWIDTH ']' 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=11264 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=10137 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=12390 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 11444 -lt 10137 ']' 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 11444 -gt 12390 ']' 00:15:42.901 00:15:42.901 real 0m5.229s 00:15:42.901 user 0m0.121s 00:15:42.901 sys 0m0.039s 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:15:42.901 07:25:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:15:42.901 07:25:46 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.901 07:25:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:42.901 07:25:46 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.901 07:25:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:15:42.901 07:25:46 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:42.901 07:25:46 blockdev_general.bdev_qos -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.901 07:25:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:42.901 ************************************ 00:15:42.901 START TEST bdev_qos_ro_bw 00:15:42.901 ************************************ 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1129 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Malloc_0 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1 00:15:42.901 07:25:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:15:44.282 71232.95 IOPS, 278.25 MiB/s [2024-11-20T07:25:49.150Z] 68146.36 IOPS, 266.20 MiB/s [2024-11-20T07:25:50.089Z] 65328.17 IOPS, 255.19 MiB/s [2024-11-20T07:25:51.027Z] 62744.83 IOPS, 245.10 MiB/s [2024-11-20T07:25:51.962Z] 60368.16 IOPS, 235.81 MiB/s [2024-11-20T07:25:52.220Z] 58174.38 IOPS, 227.24 MiB/s [2024-11-20T07:25:52.220Z] 07:25:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 512.84 2051.35 0.00 0.00 2068.00 0.00 0.00 
' 00:15:48.287 07:25:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:15:48.287 07:25:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:48.287 07:25:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:15:48.287 07:25:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2068.00 00:15:48.287 07:25:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2068 00:15:48.287 ************************************ 00:15:48.287 END TEST bdev_qos_ro_bw 00:15:48.287 ************************************ 00:15:48.287 07:25:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2068 00:15:48.287 07:25:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:48.287 07:25:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048 00:15:48.287 07:25:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843 00:15:48.287 07:25:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # upper_limit=2252 00:15:48.287 07:25:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2068 -lt 1843 ']' 00:15:48.287 07:25:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2068 -gt 2252 ']' 00:15:48.287 00:15:48.287 real 0m5.179s 00:15:48.287 user 0m0.116s 00:15:48.287 sys 0m0.047s 00:15:48.287 07:25:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.287 07:25:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:15:48.287 07:25:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:15:48.287 07:25:52 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.287 07:25:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:48.854 07:25:52 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.854 07:25:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1 00:15:48.854 07:25:52 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.854 07:25:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:48.854 00:15:48.854 Latency(us) 00:15:48.854 [2024-11-20T07:25:52.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.854 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:48.854 Malloc_0 : 26.79 27501.68 107.43 0.00 0.00 9219.20 1917.43 505514.37 00:15:48.854 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:48.854 Null_1 : 27.00 28846.07 112.68 0.00 0.00 8857.55 615.29 213378.35 00:15:48.854 [2024-11-20T07:25:52.787Z] =================================================================================================================== 00:15:48.854 [2024-11-20T07:25:52.787Z] Total : 56347.75 220.11 0.00 0.00 9033.34 615.29 505514.37 00:15:48.854 { 00:15:48.854 "results": [ 00:15:48.854 { 00:15:48.854 "job": "Malloc_0", 00:15:48.855 "core_mask": "0x2", 00:15:48.855 "workload": "randread", 00:15:48.855 "status": "finished", 00:15:48.855 "queue_depth": 256, 00:15:48.855 "io_size": 4096, 00:15:48.855 "runtime": 26.786111, 
00:15:48.855 "iops": 27501.678015147478, 00:15:48.855 "mibps": 107.42842974666983, 00:15:48.855 "io_failed": 0, 00:15:48.855 "io_timeout": 0, 00:15:48.855 "avg_latency_us": 9219.202477453104, 00:15:48.855 "min_latency_us": 1917.4288209606987, 00:15:48.855 "max_latency_us": 505514.36855895194 00:15:48.855 }, 00:15:48.855 { 00:15:48.855 "job": "Null_1", 00:15:48.855 "core_mask": "0x2", 00:15:48.855 "workload": "randread", 00:15:48.855 "status": "finished", 00:15:48.855 "queue_depth": 256, 00:15:48.855 "io_size": 4096, 00:15:48.855 "runtime": 27.001701, 00:15:48.855 "iops": 28846.07158637895, 00:15:48.855 "mibps": 112.67996713429277, 00:15:48.855 "io_failed": 0, 00:15:48.855 "io_timeout": 0, 00:15:48.855 "avg_latency_us": 8857.550704141484, 00:15:48.855 "min_latency_us": 615.2943231441049, 00:15:48.855 "max_latency_us": 213378.34759825328 00:15:48.855 } 00:15:48.855 ], 00:15:48.855 "core_count": 1 00:15:48.855 } 00:15:48.855 07:25:52 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.855 07:25:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 72347 00:15:48.855 07:25:52 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # '[' -z 72347 ']' 00:15:48.855 07:25:52 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # kill -0 72347 00:15:48.855 07:25:52 blockdev_general.bdev_qos -- common/autotest_common.sh@959 -- # uname 00:15:48.855 07:25:52 blockdev_general.bdev_qos -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:48.855 07:25:52 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72347 00:15:49.114 killing process with pid 72347 00:15:49.114 Received shutdown signal, test time was about 27.049486 seconds 00:15:49.114 00:15:49.114 Latency(us) 00:15:49.114 [2024-11-20T07:25:53.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.114 [2024-11-20T07:25:53.047Z] =================================================================================================================== 00:15:49.114 [2024-11-20T07:25:53.047Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:49.114 07:25:52 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:49.114 07:25:52 blockdev_general.bdev_qos -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:49.114 07:25:52 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72347' 00:15:49.114 07:25:52 blockdev_general.bdev_qos -- common/autotest_common.sh@973 -- # kill 72347 00:15:49.114 07:25:52 blockdev_general.bdev_qos -- common/autotest_common.sh@978 -- # wait 72347 00:15:50.491 07:25:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM EXIT 00:15:50.491 00:15:50.491 real 0m29.709s 00:15:50.491 user 0m30.399s 00:15:50.491 sys 0m0.832s 00:15:50.491 07:25:54 blockdev_general.bdev_qos -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.491 ************************************ 00:15:50.491 END TEST bdev_qos 00:15:50.491 ************************************ 00:15:50.491 07:25:54 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:50.491 07:25:54 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:15:50.491 07:25:54 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:50.491 07:25:54 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.491 07:25:54 blockdev_general 
-- common/autotest_common.sh@10 -- # set +x 00:15:50.491 ************************************ 00:15:50.491 START TEST bdev_qd_sampling 00:15:50.491 ************************************ 00:15:50.491 07:25:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1129 -- # qd_sampling_test_suite '' 00:15:50.491 07:25:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD 00:15:50.491 07:25:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=72771 00:15:50.491 07:25:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 72771' 00:15:50.491 07:25:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:15:50.491 Process bdev QD sampling period testing pid: 72771 00:15:50.491 07:25:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:15:50.491 07:25:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 72771 00:15:50.491 07:25:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # '[' -z 72771 ']' 00:15:50.491 07:25:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.491 07:25:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.491 07:25:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.491 07:25:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.491 07:25:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:50.491 [2024-11-20 07:25:54.368588] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
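The bdev_qd_sampling suite starting here follows the same pattern as the QoS suite: bdevperf is launched in RPC-server mode (-z) on two cores, a Malloc_QD bdev is created, queue-depth sampling is switched on for it, and the polled values are read back through bdev_get_iostat while the 5-second randread job runs. A standalone sketch of that flow, again assuming repository-relative paths and the default RPC socket:

    # start bdevperf as an RPC server; job parameters follow the invocation traced above
    ./build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C &
    # (wait for the RPC socket to come up, e.g. with the harness's waitforlisten helper)
    ./scripts/rpc.py bdev_malloc_create -b Malloc_QD 128 512
    ./scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10     # enable queue-depth polling with the period used above
    ./examples/bdev/bdevperf/bdevperf.py perform_tests &          # start the queued randread job
    sleep 2                                                       # let a few samples accumulate, as the harness does
    ./scripts/rpc.py bdev_get_iostat -b Malloc_QD | jq -r '.bdevs[0].queue_depth_polling_period'

The jq query is the same one the trace uses a few lines below to confirm that the polling period read back matches the one that was set.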
00:15:50.491 [2024-11-20 07:25:54.368763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72771 ] 00:15:50.750 [2024-11-20 07:25:54.529400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:51.011 [2024-11-20 07:25:54.701918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.011 [2024-11-20 07:25:54.701966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.581 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.581 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@868 -- # return 0 00:15:51.581 07:25:55 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:15:51.581 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.581 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:51.581 Malloc_QD 00:15:51.581 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.581 07:25:55 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD 00:15:51.581 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_QD 00:15:51.581 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:51.581 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # local i 00:15:51.581 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:51.581 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:51.581 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:51.581 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.581 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:51.839 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.839 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:15:51.839 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.839 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:51.839 [ 00:15:51.839 { 00:15:51.839 "name": "Malloc_QD", 00:15:51.839 "aliases": [ 00:15:51.839 "07520671-34cb-4007-8a9d-3b6c68ef66d2" 00:15:51.839 ], 00:15:51.839 "product_name": "Malloc disk", 00:15:51.839 "block_size": 512, 00:15:51.839 "num_blocks": 262144, 00:15:51.839 "uuid": "07520671-34cb-4007-8a9d-3b6c68ef66d2", 00:15:51.839 "assigned_rate_limits": { 00:15:51.839 "rw_ios_per_sec": 0, 00:15:51.839 "rw_mbytes_per_sec": 0, 00:15:51.839 "r_mbytes_per_sec": 0, 00:15:51.839 "w_mbytes_per_sec": 0 00:15:51.839 }, 00:15:51.839 "claimed": false, 00:15:51.839 "zoned": false, 00:15:51.839 "supported_io_types": { 00:15:51.839 "read": true, 00:15:51.839 "write": true, 00:15:51.839 "unmap": true, 00:15:51.839 "flush": true, 00:15:51.839 "reset": true, 00:15:51.839 "nvme_admin": false, 
00:15:51.839 "nvme_io": false, 00:15:51.839 "nvme_io_md": false, 00:15:51.839 "write_zeroes": true, 00:15:51.839 "zcopy": true, 00:15:51.839 "get_zone_info": false, 00:15:51.839 "zone_management": false, 00:15:51.839 "zone_append": false, 00:15:51.839 "compare": false, 00:15:51.839 "compare_and_write": false, 00:15:51.839 "abort": true, 00:15:51.839 "seek_hole": false, 00:15:51.839 "seek_data": false, 00:15:51.839 "copy": true, 00:15:51.839 "nvme_iov_md": false 00:15:51.839 }, 00:15:51.839 "memory_domains": [ 00:15:51.839 { 00:15:51.839 "dma_device_id": "system", 00:15:51.839 "dma_device_type": 1 00:15:51.839 }, 00:15:51.839 { 00:15:51.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.839 "dma_device_type": 2 00:15:51.839 } 00:15:51.839 ], 00:15:51.839 "driver_specific": {} 00:15:51.839 } 00:15:51.839 ] 00:15:51.839 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.839 07:25:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@911 -- # return 0 00:15:51.839 07:25:55 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2 00:15:51.839 07:25:55 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:51.840 Running I/O for 5 seconds... 00:15:53.712 61184.00 IOPS, 239.00 MiB/s [2024-11-20T07:25:57.645Z] 07:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{ 00:15:53.712 "tick_rate": 2290000000, 00:15:53.712 "ticks": 1661037498448, 00:15:53.712 "bdevs": [ 00:15:53.712 { 00:15:53.712 "name": "Malloc_QD", 00:15:53.712 "bytes_read": 493916672, 00:15:53.712 "num_read_ops": 120579, 00:15:53.712 "bytes_written": 0, 00:15:53.712 "num_write_ops": 0, 00:15:53.712 "bytes_unmapped": 0, 00:15:53.712 "num_unmap_ops": 0, 00:15:53.712 "bytes_copied": 0, 00:15:53.712 "num_copy_ops": 0, 00:15:53.712 "read_latency_ticks": 2268924157520, 00:15:53.712 "max_read_latency_ticks": 35142066, 00:15:53.712 "min_read_latency_ticks": 360838, 00:15:53.712 "write_latency_ticks": 0, 00:15:53.712 "max_write_latency_ticks": 0, 00:15:53.712 "min_write_latency_ticks": 0, 00:15:53.712 "unmap_latency_ticks": 0, 00:15:53.712 
"max_unmap_latency_ticks": 0, 00:15:53.712 "min_unmap_latency_ticks": 0, 00:15:53.712 "copy_latency_ticks": 0, 00:15:53.712 "max_copy_latency_ticks": 0, 00:15:53.712 "min_copy_latency_ticks": 0, 00:15:53.712 "io_error": {}, 00:15:53.712 "queue_depth_polling_period": 10, 00:15:53.712 "queue_depth": 768, 00:15:53.712 "io_time": 30, 00:15:53.712 "weighted_io_time": 23040 00:15:53.712 } 00:15:53.712 ] 00:15:53.712 }' 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']' 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']' 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.712 07:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:53.712 00:15:53.712 Latency(us) 00:15:53.712 [2024-11-20T07:25:57.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.712 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:15:53.712 Malloc_QD : 2.00 43494.80 169.90 0.00 0.00 5869.22 1574.01 7125.97 00:15:53.712 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:53.712 Malloc_QD : 2.00 18707.80 73.08 0.00 0.00 13630.79 944.41 15453.90 00:15:53.712 [2024-11-20T07:25:57.645Z] =================================================================================================================== 00:15:53.712 [2024-11-20T07:25:57.645Z] Total : 62202.60 242.98 0.00 0.00 8205.69 944.41 15453.90 00:15:53.971 { 00:15:53.971 "results": [ 00:15:53.971 { 00:15:53.971 "job": "Malloc_QD", 00:15:53.971 "core_mask": "0x1", 00:15:53.971 "workload": "randread", 00:15:53.971 "status": "finished", 00:15:53.971 "queue_depth": 256, 00:15:53.971 "io_size": 4096, 00:15:53.971 "runtime": 1.995273, 00:15:53.971 "iops": 43494.79995970476, 00:15:53.971 "mibps": 169.90156234259672, 00:15:53.971 "io_failed": 0, 00:15:53.971 "io_timeout": 0, 00:15:53.971 "avg_latency_us": 5869.218801767335, 00:15:53.971 "min_latency_us": 1574.0087336244542, 00:15:53.971 "max_latency_us": 7125.966812227074 00:15:53.971 }, 00:15:53.971 { 00:15:53.971 "job": "Malloc_QD", 00:15:53.971 "core_mask": "0x2", 00:15:53.971 "workload": "randread", 00:15:53.971 "status": "finished", 00:15:53.971 "queue_depth": 256, 00:15:53.971 "io_size": 4096, 00:15:53.971 "runtime": 1.997883, 00:15:53.971 "iops": 18707.802208637844, 00:15:53.971 "mibps": 73.07735237749158, 00:15:53.971 "io_failed": 0, 00:15:53.971 "io_timeout": 0, 00:15:53.971 "avg_latency_us": 13630.789256445534, 00:15:53.971 "min_latency_us": 944.4052401746725, 00:15:53.971 "max_latency_us": 15453.903930131004 00:15:53.971 } 00:15:53.971 ], 00:15:53.971 "core_count": 2 00:15:53.971 } 00:15:53.971 07:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.971 07:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 72771 00:15:53.971 07:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # '[' -z 72771 ']' 00:15:53.971 07:25:57 blockdev_general.bdev_qd_sampling -- 
common/autotest_common.sh@958 -- # kill -0 72771 00:15:53.971 07:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@959 -- # uname 00:15:53.971 07:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.971 07:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72771 00:15:53.971 07:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:53.971 07:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:53.971 killing process with pid 72771 00:15:53.971 07:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72771' 00:15:53.971 07:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@973 -- # kill 72771 00:15:53.971 Received shutdown signal, test time was about 2.205856 seconds 00:15:53.971 00:15:53.971 Latency(us) 00:15:53.971 [2024-11-20T07:25:57.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.971 [2024-11-20T07:25:57.904Z] =================================================================================================================== 00:15:53.971 [2024-11-20T07:25:57.904Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:53.971 07:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@978 -- # wait 72771 00:15:55.976 07:25:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT 00:15:55.976 00:15:55.976 real 0m5.335s 00:15:55.976 user 0m9.884s 00:15:55.976 sys 0m0.584s 00:15:55.976 07:25:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.976 07:25:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:55.976 ************************************ 00:15:55.976 END TEST bdev_qd_sampling 00:15:55.976 ************************************ 00:15:55.976 07:25:59 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite '' 00:15:55.976 07:25:59 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:55.976 07:25:59 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.976 07:25:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:55.976 ************************************ 00:15:55.976 START TEST bdev_error 00:15:55.976 ************************************ 00:15:55.976 07:25:59 blockdev_general.bdev_error -- common/autotest_common.sh@1129 -- # error_test_suite '' 00:15:55.976 07:25:59 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1 00:15:55.976 07:25:59 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2 00:15:55.976 07:25:59 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1 00:15:55.976 07:25:59 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=72854 00:15:55.976 07:25:59 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:15:55.976 07:25:59 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 72854' 00:15:55.976 Process error testing pid: 72854 00:15:55.976 07:25:59 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 72854 00:15:55.976 07:25:59 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # 
'[' -z 72854 ']' 00:15:55.976 07:25:59 blockdev_general.bdev_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.976 07:25:59 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.976 07:25:59 blockdev_general.bdev_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.976 07:25:59 blockdev_general.bdev_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.976 07:25:59 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:55.976 [2024-11-20 07:25:59.771565] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:15:55.976 [2024-11-20 07:25:59.771774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72854 ] 00:15:56.234 [2024-11-20 07:25:59.968564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.234 [2024-11-20 07:26:00.099185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.803 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.803 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@868 -- # return 0 00:15:56.803 07:26:00 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:15:56.803 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.803 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:57.061 Dev_1 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.061 07:26:00 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_1 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:57.061 [ 00:15:57.061 { 00:15:57.061 "name": "Dev_1", 00:15:57.061 "aliases": [ 00:15:57.061 "c0b678f3-95f7-4c77-a44d-3add33fe9b71" 00:15:57.061 ], 00:15:57.061 "product_name": 
"Malloc disk", 00:15:57.061 "block_size": 512, 00:15:57.061 "num_blocks": 262144, 00:15:57.061 "uuid": "c0b678f3-95f7-4c77-a44d-3add33fe9b71", 00:15:57.061 "assigned_rate_limits": { 00:15:57.061 "rw_ios_per_sec": 0, 00:15:57.061 "rw_mbytes_per_sec": 0, 00:15:57.061 "r_mbytes_per_sec": 0, 00:15:57.061 "w_mbytes_per_sec": 0 00:15:57.061 }, 00:15:57.061 "claimed": false, 00:15:57.061 "zoned": false, 00:15:57.061 "supported_io_types": { 00:15:57.061 "read": true, 00:15:57.061 "write": true, 00:15:57.061 "unmap": true, 00:15:57.061 "flush": true, 00:15:57.061 "reset": true, 00:15:57.061 "nvme_admin": false, 00:15:57.061 "nvme_io": false, 00:15:57.061 "nvme_io_md": false, 00:15:57.061 "write_zeroes": true, 00:15:57.061 "zcopy": true, 00:15:57.061 "get_zone_info": false, 00:15:57.061 "zone_management": false, 00:15:57.061 "zone_append": false, 00:15:57.061 "compare": false, 00:15:57.061 "compare_and_write": false, 00:15:57.061 "abort": true, 00:15:57.061 "seek_hole": false, 00:15:57.061 "seek_data": false, 00:15:57.061 "copy": true, 00:15:57.061 "nvme_iov_md": false 00:15:57.061 }, 00:15:57.061 "memory_domains": [ 00:15:57.061 { 00:15:57.061 "dma_device_id": "system", 00:15:57.061 "dma_device_type": 1 00:15:57.061 }, 00:15:57.061 { 00:15:57.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.061 "dma_device_type": 2 00:15:57.061 } 00:15:57.061 ], 00:15:57.061 "driver_specific": {} 00:15:57.061 } 00:15:57.061 ] 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0 00:15:57.061 07:26:00 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:57.061 true 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.061 07:26:00 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.061 07:26:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:57.320 Dev_2 00:15:57.320 07:26:01 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.320 07:26:01 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2 00:15:57.320 07:26:01 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_2 00:15:57.320 07:26:01 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:57.320 07:26:01 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i 00:15:57.320 07:26:01 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:57.320 07:26:01 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:57.320 07:26:01 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:57.320 07:26:01 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.320 07:26:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:57.320 07:26:01 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.320 07:26:01 
blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:15:57.321 07:26:01 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.321 07:26:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:57.321 [ 00:15:57.321 { 00:15:57.321 "name": "Dev_2", 00:15:57.321 "aliases": [ 00:15:57.321 "7dd8e7a5-5ebb-4c9c-9ee7-175f0a60cb75" 00:15:57.321 ], 00:15:57.321 "product_name": "Malloc disk", 00:15:57.321 "block_size": 512, 00:15:57.321 "num_blocks": 262144, 00:15:57.321 "uuid": "7dd8e7a5-5ebb-4c9c-9ee7-175f0a60cb75", 00:15:57.321 "assigned_rate_limits": { 00:15:57.321 "rw_ios_per_sec": 0, 00:15:57.321 "rw_mbytes_per_sec": 0, 00:15:57.321 "r_mbytes_per_sec": 0, 00:15:57.321 "w_mbytes_per_sec": 0 00:15:57.321 }, 00:15:57.321 "claimed": false, 00:15:57.321 "zoned": false, 00:15:57.321 "supported_io_types": { 00:15:57.321 "read": true, 00:15:57.321 "write": true, 00:15:57.321 "unmap": true, 00:15:57.321 "flush": true, 00:15:57.321 "reset": true, 00:15:57.321 "nvme_admin": false, 00:15:57.321 "nvme_io": false, 00:15:57.321 "nvme_io_md": false, 00:15:57.321 "write_zeroes": true, 00:15:57.321 "zcopy": true, 00:15:57.321 "get_zone_info": false, 00:15:57.321 "zone_management": false, 00:15:57.321 "zone_append": false, 00:15:57.321 "compare": false, 00:15:57.321 "compare_and_write": false, 00:15:57.321 "abort": true, 00:15:57.321 "seek_hole": false, 00:15:57.321 "seek_data": false, 00:15:57.321 "copy": true, 00:15:57.321 "nvme_iov_md": false 00:15:57.321 }, 00:15:57.321 "memory_domains": [ 00:15:57.321 { 00:15:57.321 "dma_device_id": "system", 00:15:57.321 "dma_device_type": 1 00:15:57.321 }, 00:15:57.321 { 00:15:57.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.321 "dma_device_type": 2 00:15:57.321 } 00:15:57.321 ], 00:15:57.321 "driver_specific": {} 00:15:57.321 } 00:15:57.321 ] 00:15:57.321 07:26:01 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.321 07:26:01 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0 00:15:57.321 07:26:01 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:15:57.321 07:26:01 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.321 07:26:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:57.321 07:26:01 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.321 07:26:01 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1 00:15:57.321 07:26:01 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:15:57.321 Running I/O for 5 seconds... 00:15:58.254 Process is existed as continue on error is set. Pid: 72854 00:15:58.254 07:26:02 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 72854 00:15:58.254 07:26:02 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process is existed as continue on error is set. 
Pid: 72854' 00:15:58.254 07:26:02 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:15:58.254 07:26:02 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.254 07:26:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:58.254 07:26:02 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.254 07:26:02 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1 00:15:58.254 07:26:02 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.254 07:26:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:58.513 Timeout while waiting for response: 00:15:58.513 00:15:58.513 00:15:58.771 07:26:02 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.771 07:26:02 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5 00:15:59.720 84139.00 IOPS, 328.67 MiB/s [2024-11-20T07:26:04.591Z] 96757.50 IOPS, 377.96 MiB/s [2024-11-20T07:26:05.527Z] 101134.33 IOPS, 395.06 MiB/s [2024-11-20T07:26:06.462Z] 103914.75 IOPS, 405.92 MiB/s 00:16:02.529 Latency(us) 00:16:02.529 [2024-11-20T07:26:06.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.529 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:16:02.529 EE_Dev_1 : 0.89 46544.19 181.81 5.61 0.00 341.28 126.99 618.87 00:16:02.529 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:16:02.529 Dev_2 : 5.00 91425.96 357.13 0.00 0.00 172.55 51.65 379135.78 00:16:02.529 [2024-11-20T07:26:06.462Z] =================================================================================================================== 00:16:02.529 [2024-11-20T07:26:06.462Z] Total : 137970.15 538.95 5.61 0.00 186.60 51.65 379135.78 00:16:03.904 07:26:07 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 72854 00:16:03.905 07:26:07 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # '[' -z 72854 ']' 00:16:03.905 07:26:07 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # kill -0 72854 00:16:03.905 07:26:07 blockdev_general.bdev_error -- common/autotest_common.sh@959 -- # uname 00:16:03.905 07:26:07 blockdev_general.bdev_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.905 07:26:07 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72854 00:16:03.905 07:26:07 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:03.905 07:26:07 blockdev_general.bdev_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:03.905 killing process with pid 72854 00:16:03.905 07:26:07 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72854' 00:16:03.905 07:26:07 blockdev_general.bdev_error -- common/autotest_common.sh@973 -- # kill 72854 00:16:03.905 Received shutdown signal, test time was about 5.000000 seconds 00:16:03.905 00:16:03.905 Latency(us) 00:16:03.905 [2024-11-20T07:26:07.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.905 [2024-11-20T07:26:07.838Z] =================================================================================================================== 00:16:03.905 [2024-11-20T07:26:07.838Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:03.905 07:26:07 blockdev_general.bdev_error 
-- common/autotest_common.sh@978 -- # wait 72854 00:16:05.281 07:26:09 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=72966 00:16:05.281 07:26:09 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:16:05.281 Process error testing pid: 72966 00:16:05.281 07:26:09 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 72966' 00:16:05.281 07:26:09 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 72966 00:16:05.281 07:26:09 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # '[' -z 72966 ']' 00:16:05.281 07:26:09 blockdev_general.bdev_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.281 07:26:09 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.281 07:26:09 blockdev_general.bdev_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.281 07:26:09 blockdev_general.bdev_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.281 07:26:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:05.281 [2024-11-20 07:26:09.165737] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:16:05.281 [2024-11-20 07:26:09.165855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72966 ] 00:16:05.539 [2024-11-20 07:26:09.340246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.797 [2024-11-20 07:26:09.472743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.367 07:26:09 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.368 07:26:09 blockdev_general.bdev_error -- common/autotest_common.sh@868 -- # return 0 00:16:06.368 07:26:09 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:16:06.368 07:26:09 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.368 07:26:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:06.368 Dev_1 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.368 07:26:10 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_1 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.368 07:26:10 
blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:06.368 [ 00:16:06.368 { 00:16:06.368 "name": "Dev_1", 00:16:06.368 "aliases": [ 00:16:06.368 "cb7aecbd-c99e-4377-8332-3978cf46c927" 00:16:06.368 ], 00:16:06.368 "product_name": "Malloc disk", 00:16:06.368 "block_size": 512, 00:16:06.368 "num_blocks": 262144, 00:16:06.368 "uuid": "cb7aecbd-c99e-4377-8332-3978cf46c927", 00:16:06.368 "assigned_rate_limits": { 00:16:06.368 "rw_ios_per_sec": 0, 00:16:06.368 "rw_mbytes_per_sec": 0, 00:16:06.368 "r_mbytes_per_sec": 0, 00:16:06.368 "w_mbytes_per_sec": 0 00:16:06.368 }, 00:16:06.368 "claimed": false, 00:16:06.368 "zoned": false, 00:16:06.368 "supported_io_types": { 00:16:06.368 "read": true, 00:16:06.368 "write": true, 00:16:06.368 "unmap": true, 00:16:06.368 "flush": true, 00:16:06.368 "reset": true, 00:16:06.368 "nvme_admin": false, 00:16:06.368 "nvme_io": false, 00:16:06.368 "nvme_io_md": false, 00:16:06.368 "write_zeroes": true, 00:16:06.368 "zcopy": true, 00:16:06.368 "get_zone_info": false, 00:16:06.368 "zone_management": false, 00:16:06.368 "zone_append": false, 00:16:06.368 "compare": false, 00:16:06.368 "compare_and_write": false, 00:16:06.368 "abort": true, 00:16:06.368 "seek_hole": false, 00:16:06.368 "seek_data": false, 00:16:06.368 "copy": true, 00:16:06.368 "nvme_iov_md": false 00:16:06.368 }, 00:16:06.368 "memory_domains": [ 00:16:06.368 { 00:16:06.368 "dma_device_id": "system", 00:16:06.368 "dma_device_type": 1 00:16:06.368 }, 00:16:06.368 { 00:16:06.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.368 "dma_device_type": 2 00:16:06.368 } 00:16:06.368 ], 00:16:06.368 "driver_specific": {} 00:16:06.368 } 00:16:06.368 ] 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0 00:16:06.368 07:26:10 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:06.368 true 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.368 07:26:10 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.368 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:06.628 Dev_2 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.628 07:26:10 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_2 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:06.628 07:26:10 blockdev_general.bdev_error -- 
common/autotest_common.sh@905 -- # local i 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:06.628 [ 00:16:06.628 { 00:16:06.628 "name": "Dev_2", 00:16:06.628 "aliases": [ 00:16:06.628 "37be43b6-2829-4710-9734-d13d788f10b9" 00:16:06.628 ], 00:16:06.628 "product_name": "Malloc disk", 00:16:06.628 "block_size": 512, 00:16:06.628 "num_blocks": 262144, 00:16:06.628 "uuid": "37be43b6-2829-4710-9734-d13d788f10b9", 00:16:06.628 "assigned_rate_limits": { 00:16:06.628 "rw_ios_per_sec": 0, 00:16:06.628 "rw_mbytes_per_sec": 0, 00:16:06.628 "r_mbytes_per_sec": 0, 00:16:06.628 "w_mbytes_per_sec": 0 00:16:06.628 }, 00:16:06.628 "claimed": false, 00:16:06.628 "zoned": false, 00:16:06.628 "supported_io_types": { 00:16:06.628 "read": true, 00:16:06.628 "write": true, 00:16:06.628 "unmap": true, 00:16:06.628 "flush": true, 00:16:06.628 "reset": true, 00:16:06.628 "nvme_admin": false, 00:16:06.628 "nvme_io": false, 00:16:06.628 "nvme_io_md": false, 00:16:06.628 "write_zeroes": true, 00:16:06.628 "zcopy": true, 00:16:06.628 "get_zone_info": false, 00:16:06.628 "zone_management": false, 00:16:06.628 "zone_append": false, 00:16:06.628 "compare": false, 00:16:06.628 "compare_and_write": false, 00:16:06.628 "abort": true, 00:16:06.628 "seek_hole": false, 00:16:06.628 "seek_data": false, 00:16:06.628 "copy": true, 00:16:06.628 "nvme_iov_md": false 00:16:06.628 }, 00:16:06.628 "memory_domains": [ 00:16:06.628 { 00:16:06.628 "dma_device_id": "system", 00:16:06.628 "dma_device_type": 1 00:16:06.628 }, 00:16:06.628 { 00:16:06.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.628 "dma_device_type": 2 00:16:06.628 } 00:16:06.628 ], 00:16:06.628 "driver_specific": {} 00:16:06.628 } 00:16:06.628 ] 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0 00:16:06.628 07:26:10 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.628 07:26:10 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 72966 00:16:06.628 07:26:10 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:16:06.628 07:26:10 blockdev_general.bdev_error -- 
common/autotest_common.sh@652 -- # local es=0 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@654 -- # valid_exec_arg wait 72966 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # local arg=wait 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@644 -- # type -t wait 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.628 07:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@655 -- # wait 72966 00:16:06.628 Running I/O for 5 seconds... 00:16:06.628 task offset: 83512 on job bdev=EE_Dev_1 fails 00:16:06.628 00:16:06.629 Latency(us) 00:16:06.629 [2024-11-20T07:26:10.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.629 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:16:06.629 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:16:06.629 EE_Dev_1 : 0.00 30640.67 119.69 6963.79 0.00 351.88 131.47 636.76 00:16:06.629 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:16:06.629 Dev_2 : 0.00 21993.13 85.91 0.00 0.00 510.49 142.20 922.94 00:16:06.629 [2024-11-20T07:26:10.562Z] =================================================================================================================== 00:16:06.629 [2024-11-20T07:26:10.562Z] Total : 52633.80 205.60 6963.79 0.00 437.91 131.47 922.94 00:16:06.629 request: 00:16:06.629 { 00:16:06.629 "method": "perform_tests", 00:16:06.629 "req_id": 1 00:16:06.629 } 00:16:06.629 Got JSON-RPC error response 00:16:06.629 response: 00:16:06.629 { 00:16:06.629 "code": -32603, 00:16:06.629 "message": "bdevperf failed with error Operation not permitted" 00:16:06.629 } 00:16:06.629 [2024-11-20 07:26:10.515136] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:09.168 07:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@655 -- # es=255 00:16:09.168 07:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:09.168 07:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@664 -- # es=127 00:16:09.168 07:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@665 -- # case "$es" in 00:16:09.168 07:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@672 -- # es=1 00:16:09.168 07:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:09.168 00:16:09.168 real 0m12.859s 00:16:09.168 user 0m12.909s 00:16:09.168 sys 0m0.927s 00:16:09.168 07:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.168 07:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:09.168 ************************************ 00:16:09.168 END TEST bdev_error 00:16:09.168 ************************************ 00:16:09.168 07:26:12 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite '' 00:16:09.168 07:26:12 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:09.168 07:26:12 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.168 07:26:12 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:16:09.168 ************************************ 00:16:09.168 START TEST bdev_stat 00:16:09.168 ************************************ 00:16:09.168 
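For reference, the bdev_error sequence above reduces to a short RPC recipe: a malloc bdev (Dev_1) is wrapped by an error-injection bdev (bdev_error_create exposes it as EE_Dev_1), bdev_error_inject_error arms a fixed number of artificial failures, and in this second half of the suite (pid 72966) bdevperf's perform_tests is expected to abort with JSON-RPC error -32603 ("bdevperf failed with error Operation not permitted"), which the NOT wrapper counts as a pass. A minimal stand-alone sketch of the same steps, assuming an SPDK checkout as the working directory and the default /var/tmp/spdk.sock socket; the paths and lack of error handling are illustrative, not the autotest helpers:

  ./build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 &    # start idle, wait for RPC configuration
  ./scripts/rpc.py bdev_malloc_create -b Dev_1 128 512                    # 128 MB backing bdev, 512-byte blocks
  ./scripts/rpc.py bdev_error_create Dev_1                                # creates the stacked error bdev EE_Dev_1
  ./scripts/rpc.py bdev_malloc_create -b Dev_2 128 512
  ./scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5      # the next 5 I/Os of any type will fail
  ./examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests                 # expected to fail with -32603 here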
07:26:12 blockdev_general.bdev_stat -- common/autotest_common.sh@1129 -- # stat_test_suite '' 00:16:09.168 07:26:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT 00:16:09.168 07:26:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:16:09.168 07:26:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=73024 00:16:09.168 Process Bdev IO statistics testing pid: 73024 00:16:09.168 07:26:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 73024' 00:16:09.168 07:26:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:16:09.168 07:26:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 73024 00:16:09.168 07:26:12 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # '[' -z 73024 ']' 00:16:09.168 07:26:12 blockdev_general.bdev_stat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.168 07:26:12 blockdev_general.bdev_stat -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.168 07:26:12 blockdev_general.bdev_stat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.168 07:26:12 blockdev_general.bdev_stat -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.168 07:26:12 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:09.168 [2024-11-20 07:26:12.679099] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
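The stat suite drives bdevperf the same way: -z keeps the app idle until it is configured over JSON-RPC, waitforlisten polls the UNIX socket until the RPC server responds, and only then is the Malloc_STAT bdev created. A rough stand-alone equivalent, assuming the default /var/tmp/spdk.sock and rpc.py from an SPDK checkout; the polling loop is a simplification of the waitforlisten helper, which also enforces a retry limit:

  ./build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C &
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
  ./scripts/rpc.py bdev_malloc_create -b Malloc_STAT 128 512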
00:16:09.168 [2024-11-20 07:26:12.679249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73024 ] 00:16:09.168 [2024-11-20 07:26:12.921435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:09.168 [2024-11-20 07:26:13.047095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.168 [2024-11-20 07:26:13.047130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.738 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.738 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@868 -- # return 0 00:16:09.738 07:26:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:16:09.738 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.738 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:09.998 Malloc_STAT 00:16:09.998 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.998 07:26:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT 00:16:09.998 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_STAT 00:16:09.998 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.998 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # local i 00:16:09.998 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.998 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:09.998 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:09.998 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.998 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:09.998 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.998 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:16:09.998 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.998 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:09.998 [ 00:16:09.998 { 00:16:09.998 "name": "Malloc_STAT", 00:16:09.998 "aliases": [ 00:16:09.998 "fe569e4e-12e2-4647-909e-9bb97ab52ff5" 00:16:09.998 ], 00:16:09.998 "product_name": "Malloc disk", 00:16:09.998 "block_size": 512, 00:16:09.998 "num_blocks": 262144, 00:16:09.998 "uuid": "fe569e4e-12e2-4647-909e-9bb97ab52ff5", 00:16:09.998 "assigned_rate_limits": { 00:16:09.998 "rw_ios_per_sec": 0, 00:16:09.998 "rw_mbytes_per_sec": 0, 00:16:09.998 "r_mbytes_per_sec": 0, 00:16:09.998 "w_mbytes_per_sec": 0 00:16:09.998 }, 00:16:09.998 "claimed": false, 00:16:09.998 "zoned": false, 00:16:09.998 "supported_io_types": { 00:16:09.998 "read": true, 00:16:09.998 "write": true, 00:16:09.998 "unmap": true, 00:16:09.998 "flush": true, 00:16:09.998 "reset": true, 00:16:09.998 "nvme_admin": false, 00:16:09.998 "nvme_io": false, 00:16:09.998 "nvme_io_md": false, 00:16:09.998 "write_zeroes": true, 00:16:09.998 
"zcopy": true, 00:16:09.998 "get_zone_info": false, 00:16:09.998 "zone_management": false, 00:16:09.998 "zone_append": false, 00:16:09.998 "compare": false, 00:16:09.998 "compare_and_write": false, 00:16:09.998 "abort": true, 00:16:09.998 "seek_hole": false, 00:16:09.998 "seek_data": false, 00:16:09.998 "copy": true, 00:16:09.998 "nvme_iov_md": false 00:16:09.998 }, 00:16:09.998 "memory_domains": [ 00:16:09.998 { 00:16:09.998 "dma_device_id": "system", 00:16:09.998 "dma_device_type": 1 00:16:09.998 }, 00:16:09.998 { 00:16:09.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.998 "dma_device_type": 2 00:16:09.998 } 00:16:09.998 ], 00:16:09.998 "driver_specific": {} 00:16:09.998 } 00:16:09.998 ] 00:16:09.998 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.998 07:26:13 blockdev_general.bdev_stat -- common/autotest_common.sh@911 -- # return 0 00:16:09.998 07:26:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2 00:16:09.998 07:26:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:09.998 Running I/O for 10 seconds... 00:16:11.875 111872.00 IOPS, 437.00 MiB/s [2024-11-20T07:26:15.808Z] 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{ 00:16:11.875 "tick_rate": 2290000000, 00:16:11.875 "ticks": 1702703517410, 00:16:11.875 "bdevs": [ 00:16:11.875 { 00:16:11.875 "name": "Malloc_STAT", 00:16:11.875 "bytes_read": 872452608, 00:16:11.875 "num_read_ops": 212995, 00:16:11.875 "bytes_written": 0, 00:16:11.875 "num_write_ops": 0, 00:16:11.875 "bytes_unmapped": 0, 00:16:11.875 "num_unmap_ops": 0, 00:16:11.875 "bytes_copied": 0, 00:16:11.875 "num_copy_ops": 0, 00:16:11.875 "read_latency_ticks": 2234901012348, 00:16:11.875 "max_read_latency_ticks": 13861632, 00:16:11.875 "min_read_latency_ticks": 268020, 00:16:11.875 "write_latency_ticks": 0, 00:16:11.875 "max_write_latency_ticks": 0, 00:16:11.875 "min_write_latency_ticks": 0, 00:16:11.875 "unmap_latency_ticks": 0, 00:16:11.875 "max_unmap_latency_ticks": 0, 00:16:11.875 "min_unmap_latency_ticks": 0, 00:16:11.875 "copy_latency_ticks": 0, 00:16:11.875 
"max_copy_latency_ticks": 0, 00:16:11.875 "min_copy_latency_ticks": 0, 00:16:11.875 "io_error": {} 00:16:11.875 } 00:16:11.875 ] 00:16:11.875 }' 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops' 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=212995 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{ 00:16:11.875 "tick_rate": 2290000000, 00:16:11.875 "ticks": 1702766802578, 00:16:11.875 "name": "Malloc_STAT", 00:16:11.875 "channels": [ 00:16:11.875 { 00:16:11.875 "thread_id": 2, 00:16:11.875 "bytes_read": 442499072, 00:16:11.875 "num_read_ops": 108032, 00:16:11.875 "bytes_written": 0, 00:16:11.875 "num_write_ops": 0, 00:16:11.875 "bytes_unmapped": 0, 00:16:11.875 "num_unmap_ops": 0, 00:16:11.875 "bytes_copied": 0, 00:16:11.875 "num_copy_ops": 0, 00:16:11.875 "read_latency_ticks": 1133168814420, 00:16:11.875 "max_read_latency_ticks": 13861632, 00:16:11.875 "min_read_latency_ticks": 7217748, 00:16:11.875 "write_latency_ticks": 0, 00:16:11.875 "max_write_latency_ticks": 0, 00:16:11.875 "min_write_latency_ticks": 0, 00:16:11.875 "unmap_latency_ticks": 0, 00:16:11.875 "max_unmap_latency_ticks": 0, 00:16:11.875 "min_unmap_latency_ticks": 0, 00:16:11.875 "copy_latency_ticks": 0, 00:16:11.875 "max_copy_latency_ticks": 0, 00:16:11.875 "min_copy_latency_ticks": 0 00:16:11.875 }, 00:16:11.875 { 00:16:11.875 "thread_id": 3, 00:16:11.875 "bytes_read": 442499072, 00:16:11.875 "num_read_ops": 108032, 00:16:11.875 "bytes_written": 0, 00:16:11.875 "num_write_ops": 0, 00:16:11.875 "bytes_unmapped": 0, 00:16:11.875 "num_unmap_ops": 0, 00:16:11.875 "bytes_copied": 0, 00:16:11.875 "num_copy_ops": 0, 00:16:11.875 "read_latency_ticks": 1134070585530, 00:16:11.875 "max_read_latency_ticks": 13629138, 00:16:11.875 "min_read_latency_ticks": 7155204, 00:16:11.875 "write_latency_ticks": 0, 00:16:11.875 "max_write_latency_ticks": 0, 00:16:11.875 "min_write_latency_ticks": 0, 00:16:11.875 "unmap_latency_ticks": 0, 00:16:11.875 "max_unmap_latency_ticks": 0, 00:16:11.875 "min_unmap_latency_ticks": 0, 00:16:11.875 "copy_latency_ticks": 0, 00:16:11.875 "max_copy_latency_ticks": 0, 00:16:11.875 "min_copy_latency_ticks": 0 00:16:11.875 } 00:16:11.875 ] 00:16:11.875 }' 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops' 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=108032 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=108032 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops' 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=108032 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=216064 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:16:11.875 07:26:15 
blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.875 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:12.137 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.137 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{ 00:16:12.137 "tick_rate": 2290000000, 00:16:12.137 "ticks": 1702840168354, 00:16:12.137 "bdevs": [ 00:16:12.137 { 00:16:12.137 "name": "Malloc_STAT", 00:16:12.137 "bytes_read": 899715584, 00:16:12.137 "num_read_ops": 219651, 00:16:12.137 "bytes_written": 0, 00:16:12.137 "num_write_ops": 0, 00:16:12.137 "bytes_unmapped": 0, 00:16:12.137 "num_unmap_ops": 0, 00:16:12.137 "bytes_copied": 0, 00:16:12.137 "num_copy_ops": 0, 00:16:12.137 "read_latency_ticks": 2304747468320, 00:16:12.137 "max_read_latency_ticks": 13861632, 00:16:12.137 "min_read_latency_ticks": 268020, 00:16:12.137 "write_latency_ticks": 0, 00:16:12.137 "max_write_latency_ticks": 0, 00:16:12.137 "min_write_latency_ticks": 0, 00:16:12.137 "unmap_latency_ticks": 0, 00:16:12.138 "max_unmap_latency_ticks": 0, 00:16:12.138 "min_unmap_latency_ticks": 0, 00:16:12.138 "copy_latency_ticks": 0, 00:16:12.138 "max_copy_latency_ticks": 0, 00:16:12.138 "min_copy_latency_ticks": 0, 00:16:12.138 "io_error": {} 00:16:12.138 } 00:16:12.138 ] 00:16:12.138 }' 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops' 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=219651 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 216064 -lt 212995 ']' 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 216064 -gt 219651 ']' 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:12.138 111744.00 IOPS, 436.50 MiB/s 00:16:12.138 Latency(us) 00:16:12.138 [2024-11-20T07:26:16.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.138 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:16:12.138 Malloc_STAT : 2.01 55715.42 217.64 0.00 0.00 4584.15 2361.01 6610.84 00:16:12.138 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:16:12.138 Malloc_STAT : 2.01 55748.66 217.77 0.00 0.00 4581.47 765.54 5952.61 00:16:12.138 [2024-11-20T07:26:16.071Z] =================================================================================================================== 00:16:12.138 [2024-11-20T07:26:16.071Z] Total : 111464.08 435.41 0.00 0.00 4582.81 765.54 6610.84 00:16:12.138 { 00:16:12.138 "results": [ 00:16:12.138 { 00:16:12.138 "job": "Malloc_STAT", 00:16:12.138 "core_mask": "0x1", 00:16:12.138 "workload": "randread", 00:16:12.138 "status": "finished", 00:16:12.138 "queue_depth": 256, 00:16:12.138 "io_size": 4096, 00:16:12.138 "runtime": 2.007918, 00:16:12.138 "iops": 55715.422641761266, 00:16:12.138 "mibps": 217.63836969437995, 00:16:12.138 "io_failed": 0, 00:16:12.138 "io_timeout": 0, 00:16:12.138 "avg_latency_us": 4584.151937085927, 00:16:12.138 "min_latency_us": 2361.0131004366813, 00:16:12.138 "max_latency_us": 6610.836681222708 00:16:12.138 }, 00:16:12.138 { 00:16:12.138 "job": "Malloc_STAT", 00:16:12.138 
"core_mask": "0x2", 00:16:12.138 "workload": "randread", 00:16:12.138 "status": "finished", 00:16:12.138 "queue_depth": 256, 00:16:12.138 "io_size": 4096, 00:16:12.138 "runtime": 2.006721, 00:16:12.138 "iops": 55748.65663936342, 00:16:12.138 "mibps": 217.76818999751336, 00:16:12.138 "io_failed": 0, 00:16:12.138 "io_timeout": 0, 00:16:12.138 "avg_latency_us": 4581.468935676957, 00:16:12.138 "min_latency_us": 765.5406113537118, 00:16:12.138 "max_latency_us": 5952.6148471615725 00:16:12.138 } 00:16:12.138 ], 00:16:12.138 "core_count": 2 00:16:12.138 } 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 73024 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # '[' -z 73024 ']' 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # kill -0 73024 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@959 -- # uname 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73024 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:12.138 killing process with pid 73024 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73024' 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@973 -- # kill 73024 00:16:12.138 Received shutdown signal, test time was about 2.180348 seconds 00:16:12.138 00:16:12.138 Latency(us) 00:16:12.138 [2024-11-20T07:26:16.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.138 [2024-11-20T07:26:16.071Z] =================================================================================================================== 00:16:12.138 [2024-11-20T07:26:16.071Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:12.138 07:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@978 -- # wait 73024 00:16:14.049 07:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT 00:16:14.049 00:16:14.049 real 0m4.865s 00:16:14.049 user 0m9.072s 00:16:14.049 sys 0m0.444s 00:16:14.049 07:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.049 07:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:14.049 ************************************ 00:16:14.049 END TEST bdev_stat 00:16:14.049 ************************************ 00:16:14.049 07:26:17 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]] 00:16:14.049 07:26:17 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]] 00:16:14.049 07:26:17 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:14.049 07:26:17 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup 00:16:14.049 07:26:17 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:14.049 07:26:17 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:14.049 07:26:17 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:16:14.049 07:26:17 
blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:16:14.049 07:26:17 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:16:14.049 07:26:17 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:16:14.049 00:16:14.049 real 2m29.807s 00:16:14.049 user 6m14.738s 00:16:14.049 sys 0m21.436s 00:16:14.049 07:26:17 blockdev_general -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.049 07:26:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:16:14.049 ************************************ 00:16:14.049 END TEST blockdev_general 00:16:14.049 ************************************ 00:16:14.049 07:26:17 -- spdk/autotest.sh@181 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:16:14.049 07:26:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:14.049 07:26:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.049 07:26:17 -- common/autotest_common.sh@10 -- # set +x 00:16:14.049 ************************************ 00:16:14.049 START TEST bdevperf_config 00:16:14.049 ************************************ 00:16:14.049 07:26:17 bdevperf_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:16:14.049 * Looking for test storage... 00:16:14.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:16:14.049 07:26:17 bdevperf_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:14.049 07:26:17 bdevperf_config -- common/autotest_common.sh@1693 -- # lcov --version 00:16:14.049 07:26:17 bdevperf_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:14.049 07:26:17 bdevperf_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@336 -- # read -ra ver1 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@337 -- # IFS=.-: 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@337 -- # read -ra ver2 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@338 -- # local 'op=<' 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@340 -- # ver1_l=2 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@341 -- # ver2_l=1 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@344 -- # case "$op" in 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@345 -- # : 1 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@365 -- # decimal 1 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@353 -- # local d=1 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@355 -- # echo 1 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@365 -- # ver1[v]=1 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@366 -- # decimal 2 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@353 -- # local d=2 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@355 -- # echo 2 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@366 -- # ver2[v]=2 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:14.049 07:26:17 bdevperf_config -- scripts/common.sh@368 -- # return 0 00:16:14.049 07:26:17 bdevperf_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.049 07:26:17 bdevperf_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:14.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.049 --rc genhtml_branch_coverage=1 00:16:14.049 --rc genhtml_function_coverage=1 00:16:14.049 --rc genhtml_legend=1 00:16:14.049 --rc geninfo_all_blocks=1 00:16:14.049 --rc geninfo_unexecuted_blocks=1 00:16:14.049 00:16:14.049 ' 00:16:14.049 07:26:17 bdevperf_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:14.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.049 --rc genhtml_branch_coverage=1 00:16:14.049 --rc genhtml_function_coverage=1 00:16:14.049 --rc genhtml_legend=1 00:16:14.049 --rc geninfo_all_blocks=1 00:16:14.049 --rc geninfo_unexecuted_blocks=1 00:16:14.049 00:16:14.049 ' 00:16:14.049 07:26:17 bdevperf_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:14.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.049 --rc genhtml_branch_coverage=1 00:16:14.049 --rc genhtml_function_coverage=1 00:16:14.049 --rc genhtml_legend=1 00:16:14.049 --rc geninfo_all_blocks=1 00:16:14.049 --rc geninfo_unexecuted_blocks=1 00:16:14.049 00:16:14.049 ' 00:16:14.049 07:26:17 bdevperf_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:14.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.050 --rc genhtml_branch_coverage=1 00:16:14.050 --rc genhtml_function_coverage=1 00:16:14.050 --rc genhtml_legend=1 00:16:14.050 --rc geninfo_all_blocks=1 00:16:14.050 --rc geninfo_unexecuted_blocks=1 00:16:14.050 00:16:14.050 ' 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:14.050 07:26:17 bdevperf_config -- 
bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:16:14.050 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:16:14.050 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:16:14.050 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:16:14.050 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:16:14.050 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:16:14.050 07:26:17 bdevperf_config -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:16:19.341 07:26:22 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-11-20 07:26:17.895902] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:16:19.341 [2024-11-20 07:26:17.896014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73177 ] 00:16:19.341 Using job config with 4 jobs 00:16:19.341 [2024-11-20 07:26:18.072556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.341 [2024-11-20 07:26:18.200853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.341 cpumask for '\''job0'\'' is too big 00:16:19.341 cpumask for '\''job1'\'' is too big 00:16:19.341 cpumask for '\''job2'\'' is too big 00:16:19.341 cpumask for '\''job3'\'' is too big 00:16:19.341 Running I/O for 2 seconds... 00:16:19.341 133120.00 IOPS, 130.00 MiB/s [2024-11-20T07:26:23.274Z] 132608.00 IOPS, 129.50 MiB/s 00:16:19.341 Latency(us) 00:16:19.341 [2024-11-20T07:26:23.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.341 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:19.341 Malloc0 : 2.01 33097.76 32.32 0.00 0.00 7728.06 1473.84 12821.02 00:16:19.341 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:19.341 Malloc0 : 2.02 33043.06 32.27 0.00 0.00 7728.17 1566.85 11504.57 00:16:19.341 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:19.341 Malloc0 : 2.02 33022.95 32.25 0.00 0.00 7719.41 1395.14 11561.81 00:16:19.341 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:19.342 Malloc0 : 2.02 33002.20 32.23 0.00 0.00 7710.26 1402.30 12878.25 00:16:19.342 [2024-11-20T07:26:23.275Z] =================================================================================================================== 00:16:19.342 [2024-11-20T07:26:23.275Z] Total : 132165.98 129.07 0.00 0.00 7721.47 1395.14 12878.25' 00:16:19.342 07:26:22 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-11-20 07:26:17.895902] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:16:19.342 [2024-11-20 07:26:17.896014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73177 ] 00:16:19.342 Using job config with 4 jobs 00:16:19.342 [2024-11-20 07:26:18.072556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.342 [2024-11-20 07:26:18.200853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.342 cpumask for '\''job0'\'' is too big 00:16:19.342 cpumask for '\''job1'\'' is too big 00:16:19.342 cpumask for '\''job2'\'' is too big 00:16:19.342 cpumask for '\''job3'\'' is too big 00:16:19.342 Running I/O for 2 seconds... 
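The bdevperf_config cases construct an fio-style job file with the create_job helper and hand it to bdevperf through -j, while the bdevs themselves come from conf.json. In the run above, create_job wrote a [global] section for the read workload against Malloc0 plus four empty [jobN] sections that fall back to the global settings, so bdevperf reports "Using job config with 4 jobs". The sequence as traced in the log; create_job lives in test/bdev/bdevperf/common.sh, and the exact keys it writes are not shown here, so the comments only restate its arguments:

  create_job global read Malloc0     # [global]: workload read, target bdev Malloc0
  create_job job0
  create_job job1
  create_job job2
  create_job job3                    # four empty sections that inherit the [global] defaults
  ./build/examples/bdevperf -t 2 --json conf.json -j test.conf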
00:16:19.342 133120.00 IOPS, 130.00 MiB/s [2024-11-20T07:26:23.275Z] 132608.00 IOPS, 129.50 MiB/s 00:16:19.342 Latency(us) 00:16:19.342 [2024-11-20T07:26:23.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.342 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:19.342 Malloc0 : 2.01 33097.76 32.32 0.00 0.00 7728.06 1473.84 12821.02 00:16:19.342 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:19.342 Malloc0 : 2.02 33043.06 32.27 0.00 0.00 7728.17 1566.85 11504.57 00:16:19.342 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:19.342 Malloc0 : 2.02 33022.95 32.25 0.00 0.00 7719.41 1395.14 11561.81 00:16:19.342 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:19.342 Malloc0 : 2.02 33002.20 32.23 0.00 0.00 7710.26 1402.30 12878.25 00:16:19.342 [2024-11-20T07:26:23.275Z] =================================================================================================================== 00:16:19.342 [2024-11-20T07:26:23.275Z] Total : 132165.98 129.07 0.00 0.00 7721.47 1395.14 12878.25' 00:16:19.342 07:26:22 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-11-20 07:26:17.895902] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:16:19.342 [2024-11-20 07:26:17.896014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73177 ] 00:16:19.342 Using job config with 4 jobs 00:16:19.342 [2024-11-20 07:26:18.072556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.342 [2024-11-20 07:26:18.200853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.342 cpumask for '\''job0'\'' is too big 00:16:19.342 cpumask for '\''job1'\'' is too big 00:16:19.342 cpumask for '\''job2'\'' is too big 00:16:19.342 cpumask for '\''job3'\'' is too big 00:16:19.342 Running I/O for 2 seconds... 
00:16:19.342 133120.00 IOPS, 130.00 MiB/s [2024-11-20T07:26:23.275Z] 132608.00 IOPS, 129.50 MiB/s 00:16:19.342 Latency(us) 00:16:19.342 [2024-11-20T07:26:23.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.342 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:19.342 Malloc0 : 2.01 33097.76 32.32 0.00 0.00 7728.06 1473.84 12821.02 00:16:19.342 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:19.342 Malloc0 : 2.02 33043.06 32.27 0.00 0.00 7728.17 1566.85 11504.57 00:16:19.342 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:19.342 Malloc0 : 2.02 33022.95 32.25 0.00 0.00 7719.41 1395.14 11561.81 00:16:19.342 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:19.342 Malloc0 : 2.02 33002.20 32.23 0.00 0.00 7710.26 1402.30 12878.25 00:16:19.342 [2024-11-20T07:26:23.275Z] =================================================================================================================== 00:16:19.342 [2024-11-20T07:26:23.275Z] Total : 132165.98 129.07 0.00 0.00 7721.47 1395.14 12878.25' 00:16:19.342 07:26:22 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:16:19.342 07:26:22 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:16:19.342 07:26:22 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:16:19.342 07:26:22 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:16:19.342 [2024-11-20 07:26:22.354290] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:16:19.342 [2024-11-20 07:26:22.354402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73234 ] 00:16:19.342 [2024-11-20 07:26:22.531967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.342 [2024-11-20 07:26:22.663883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.342 cpumask for 'job0' is too big 00:16:19.342 cpumask for 'job1' is too big 00:16:19.342 cpumask for 'job2' is too big 00:16:19.342 cpumask for 'job3' is too big 00:16:23.537 07:26:26 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:16:23.537 Running I/O for 2 seconds... 
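The same captured output is echoed several times above because get_num_jobs re-reads it for each grep: the helper pulls the "Using job config with N jobs" banner out of $bdevperf_output and extracts N, and the test compares that against the number of [job] sections it created. A reconstruction of that check from the common.sh@32 trace, not the verbatim helper:

  get_num_jobs() {
      echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
  }
  [[ $(get_num_jobs "$bdevperf_output") == 4 ]]    # test_config.sh@23: four [jobN] sections were written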
00:16:23.537 123904.00 IOPS, 121.00 MiB/s [2024-11-20T07:26:27.470Z] 125952.00 IOPS, 123.00 MiB/s 00:16:23.537 Latency(us) 00:16:23.537 [2024-11-20T07:26:27.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.537 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:23.537 Malloc0 : 2.01 31402.60 30.67 0.00 0.00 8144.74 1595.47 14480.88 00:16:23.537 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:23.537 Malloc0 : 2.02 31360.71 30.63 0.00 0.00 8138.56 1738.56 12706.54 00:16:23.537 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:23.537 Malloc0 : 2.02 31362.99 30.63 0.00 0.00 8122.09 1538.24 12363.12 00:16:23.537 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:23.537 Malloc0 : 2.03 31342.41 30.61 0.00 0.00 8111.99 1638.40 13736.80 00:16:23.537 [2024-11-20T07:26:27.470Z] =================================================================================================================== 00:16:23.537 [2024-11-20T07:26:27.470Z] Total : 125468.71 122.53 0.00 0.00 8129.32 1538.24 14480.88' 00:16:23.537 07:26:26 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:16:23.537 07:26:26 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:16:23.537 07:26:26 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:16:23.537 07:26:26 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:16:23.537 07:26:26 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:16:23.537 07:26:26 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:16:23.537 07:26:26 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:16:23.537 07:26:26 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:16:23.538 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:16:23.538 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:16:23.538 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:16:23.538 07:26:26 bdevperf_config -- bdevperf/test_config.sh@32 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-11-20 07:26:26.858877] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:16:27.731 [2024-11-20 07:26:26.858992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73286 ] 00:16:27.731 Using job config with 3 jobs 00:16:27.731 [2024-11-20 07:26:27.034746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.731 [2024-11-20 07:26:27.168551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.731 cpumask for '\''job0'\'' is too big 00:16:27.731 cpumask for '\''job1'\'' is too big 00:16:27.731 cpumask for '\''job2'\'' is too big 00:16:27.731 Running I/O for 2 seconds... 00:16:27.731 125952.00 IOPS, 123.00 MiB/s [2024-11-20T07:26:31.664Z] 127104.00 IOPS, 124.12 MiB/s 00:16:27.731 Latency(us) 00:16:27.731 [2024-11-20T07:26:31.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.731 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:16:27.731 Malloc0 : 2.01 42269.71 41.28 0.00 0.00 6050.33 1423.76 9730.24 00:16:27.731 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:16:27.731 Malloc0 : 2.01 42215.50 41.23 0.00 0.00 6048.05 1445.23 9386.82 00:16:27.731 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:16:27.731 Malloc0 : 2.02 42249.61 41.26 0.00 0.00 6032.77 686.84 10932.21 00:16:27.731 [2024-11-20T07:26:31.664Z] =================================================================================================================== 00:16:27.731 [2024-11-20T07:26:31.664Z] Total : 126734.81 123.76 0.00 0.00 6043.70 686.84 10932.21' 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-11-20 07:26:26.858877] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:16:27.731 [2024-11-20 07:26:26.858992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73286 ] 00:16:27.731 Using job config with 3 jobs 00:16:27.731 [2024-11-20 07:26:27.034746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.731 [2024-11-20 07:26:27.168551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.731 cpumask for '\''job0'\'' is too big 00:16:27.731 cpumask for '\''job1'\'' is too big 00:16:27.731 cpumask for '\''job2'\'' is too big 00:16:27.731 Running I/O for 2 seconds... 
00:16:27.731 125952.00 IOPS, 123.00 MiB/s [2024-11-20T07:26:31.664Z] 127104.00 IOPS, 124.12 MiB/s 00:16:27.731 Latency(us) 00:16:27.731 [2024-11-20T07:26:31.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.731 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:16:27.731 Malloc0 : 2.01 42269.71 41.28 0.00 0.00 6050.33 1423.76 9730.24 00:16:27.731 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:16:27.731 Malloc0 : 2.01 42215.50 41.23 0.00 0.00 6048.05 1445.23 9386.82 00:16:27.731 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:16:27.731 Malloc0 : 2.02 42249.61 41.26 0.00 0.00 6032.77 686.84 10932.21 00:16:27.731 [2024-11-20T07:26:31.664Z] =================================================================================================================== 00:16:27.731 [2024-11-20T07:26:31.664Z] Total : 126734.81 123.76 0.00 0.00 6043.70 686.84 10932.21' 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-11-20 07:26:26.858877] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:16:27.731 [2024-11-20 07:26:26.858992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73286 ] 00:16:27.731 Using job config with 3 jobs 00:16:27.731 [2024-11-20 07:26:27.034746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.731 [2024-11-20 07:26:27.168551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.731 cpumask for '\''job0'\'' is too big 00:16:27.731 cpumask for '\''job1'\'' is too big 00:16:27.731 cpumask for '\''job2'\'' is too big 00:16:27.731 Running I/O for 2 seconds... 
00:16:27.731 125952.00 IOPS, 123.00 MiB/s [2024-11-20T07:26:31.664Z] 127104.00 IOPS, 124.12 MiB/s 00:16:27.731 Latency(us) 00:16:27.731 [2024-11-20T07:26:31.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.731 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:16:27.731 Malloc0 : 2.01 42269.71 41.28 0.00 0.00 6050.33 1423.76 9730.24 00:16:27.731 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:16:27.731 Malloc0 : 2.01 42215.50 41.23 0.00 0.00 6048.05 1445.23 9386.82 00:16:27.731 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:16:27.731 Malloc0 : 2.02 42249.61 41.26 0.00 0.00 6032.77 686.84 10932.21 00:16:27.731 [2024-11-20T07:26:31.664Z] =================================================================================================================== 00:16:27.731 [2024-11-20T07:26:31.664Z] Total : 126734.81 123.76 0.00 0.00 6043.70 686.84 10932.21' 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:16:27.731 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:16:27.731 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:16:27.731 07:26:31 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:16:27.731 00:16:27.732 07:26:31 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:16:27.732 07:26:31 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:16:27.732 07:26:31 bdevperf_config -- 
bdevperf/test_config.sh@40 -- # create_job job2 00:16:27.732 07:26:31 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:16:27.732 07:26:31 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:16:27.732 07:26:31 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:16:27.732 07:26:31 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:16:27.732 07:26:31 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:16:27.732 00:16:27.732 07:26:31 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:16:27.732 07:26:31 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:16:27.732 07:26:31 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:16:27.732 07:26:31 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:16:27.732 07:26:31 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:16:27.732 07:26:31 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:16:27.732 07:26:31 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:16:27.732 07:26:31 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:16:27.732 00:16:27.732 07:26:31 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:16:27.732 07:26:31 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:16:27.732 07:26:31 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:16:31.936 07:26:35 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-11-20 07:26:31.373803] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:16:31.936 [2024-11-20 07:26:31.374537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73351 ] 00:16:31.936 Using job config with 4 jobs 00:16:31.936 [2024-11-20 07:26:31.563881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.936 [2024-11-20 07:26:31.688375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.936 cpumask for '\''job0'\'' is too big 00:16:31.936 cpumask for '\''job1'\'' is too big 00:16:31.936 cpumask for '\''job2'\'' is too big 00:16:31.936 cpumask for '\''job3'\'' is too big 00:16:31.936 Running I/O for 2 seconds... 
00:16:31.936 129024.00 IOPS, 126.00 MiB/s [2024-11-20T07:26:35.869Z] 128512.00 IOPS, 125.50 MiB/s 00:16:31.936 Latency(us) 00:16:31.936 [2024-11-20T07:26:35.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.936 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.936 Malloc0 : 2.03 15862.66 15.49 0.00 0.00 16129.85 3090.78 25527.56 00:16:31.936 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.936 Malloc1 : 2.04 15849.80 15.48 0.00 0.00 16127.15 3648.84 25527.56 00:16:31.936 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.936 Malloc0 : 2.04 15826.62 15.46 0.00 0.00 16104.02 3047.85 22322.31 00:16:31.936 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.936 Malloc1 : 2.04 15814.00 15.44 0.00 0.00 16105.97 3591.60 22322.31 00:16:31.936 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.936 Malloc0 : 2.04 15803.03 15.43 0.00 0.00 16071.98 2990.62 19346.00 00:16:31.936 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.936 Malloc1 : 2.04 15791.51 15.42 0.00 0.00 16072.43 3605.91 19346.00 00:16:31.936 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.936 Malloc0 : 2.04 15780.52 15.41 0.00 0.00 16039.33 3019.23 18430.21 00:16:31.937 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.937 Malloc1 : 2.05 15769.08 15.40 0.00 0.00 16038.42 3591.60 18659.16 00:16:31.937 [2024-11-20T07:26:35.870Z] =================================================================================================================== 00:16:31.937 [2024-11-20T07:26:35.870Z] Total : 126497.21 123.53 0.00 0.00 16086.14 2990.62 25527.56' 00:16:31.937 07:26:35 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-11-20 07:26:31.373803] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:16:31.937 [2024-11-20 07:26:31.374537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73351 ] 00:16:31.937 Using job config with 4 jobs 00:16:31.937 [2024-11-20 07:26:31.563881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.937 [2024-11-20 07:26:31.688375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.937 cpumask for '\''job0'\'' is too big 00:16:31.937 cpumask for '\''job1'\'' is too big 00:16:31.937 cpumask for '\''job2'\'' is too big 00:16:31.937 cpumask for '\''job3'\'' is too big 00:16:31.937 Running I/O for 2 seconds... 
00:16:31.937 129024.00 IOPS, 126.00 MiB/s [2024-11-20T07:26:35.870Z] 128512.00 IOPS, 125.50 MiB/s 00:16:31.937 Latency(us) 00:16:31.937 [2024-11-20T07:26:35.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.937 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.937 Malloc0 : 2.03 15862.66 15.49 0.00 0.00 16129.85 3090.78 25527.56 00:16:31.937 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.937 Malloc1 : 2.04 15849.80 15.48 0.00 0.00 16127.15 3648.84 25527.56 00:16:31.937 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.937 Malloc0 : 2.04 15826.62 15.46 0.00 0.00 16104.02 3047.85 22322.31 00:16:31.937 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.937 Malloc1 : 2.04 15814.00 15.44 0.00 0.00 16105.97 3591.60 22322.31 00:16:31.937 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.937 Malloc0 : 2.04 15803.03 15.43 0.00 0.00 16071.98 2990.62 19346.00 00:16:31.937 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.937 Malloc1 : 2.04 15791.51 15.42 0.00 0.00 16072.43 3605.91 19346.00 00:16:31.937 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.937 Malloc0 : 2.04 15780.52 15.41 0.00 0.00 16039.33 3019.23 18430.21 00:16:31.937 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.937 Malloc1 : 2.05 15769.08 15.40 0.00 0.00 16038.42 3591.60 18659.16 00:16:31.937 [2024-11-20T07:26:35.870Z] =================================================================================================================== 00:16:31.937 [2024-11-20T07:26:35.870Z] Total : 126497.21 123.53 0.00 0.00 16086.14 2990.62 25527.56' 00:16:31.937 07:26:35 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-11-20 07:26:31.373803] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:16:31.937 [2024-11-20 07:26:31.374537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73351 ] 00:16:31.937 Using job config with 4 jobs 00:16:31.937 [2024-11-20 07:26:31.563881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.937 [2024-11-20 07:26:31.688375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.937 cpumask for '\''job0'\'' is too big 00:16:31.937 cpumask for '\''job1'\'' is too big 00:16:31.937 cpumask for '\''job2'\'' is too big 00:16:31.937 cpumask for '\''job3'\'' is too big 00:16:31.937 Running I/O for 2 seconds... 
00:16:31.937 129024.00 IOPS, 126.00 MiB/s [2024-11-20T07:26:35.870Z] 128512.00 IOPS, 125.50 MiB/s 00:16:31.937 Latency(us) 00:16:31.937 [2024-11-20T07:26:35.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.937 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.937 Malloc0 : 2.03 15862.66 15.49 0.00 0.00 16129.85 3090.78 25527.56 00:16:31.937 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.937 Malloc1 : 2.04 15849.80 15.48 0.00 0.00 16127.15 3648.84 25527.56 00:16:31.937 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.937 Malloc0 : 2.04 15826.62 15.46 0.00 0.00 16104.02 3047.85 22322.31 00:16:31.937 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.937 Malloc1 : 2.04 15814.00 15.44 0.00 0.00 16105.97 3591.60 22322.31 00:16:31.937 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.937 Malloc0 : 2.04 15803.03 15.43 0.00 0.00 16071.98 2990.62 19346.00 00:16:31.937 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.937 Malloc1 : 2.04 15791.51 15.42 0.00 0.00 16072.43 3605.91 19346.00 00:16:31.937 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.937 Malloc0 : 2.04 15780.52 15.41 0.00 0.00 16039.33 3019.23 18430.21 00:16:31.937 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:31.937 Malloc1 : 2.05 15769.08 15.40 0.00 0.00 16038.42 3591.60 18659.16 00:16:31.937 [2024-11-20T07:26:35.870Z] =================================================================================================================== 00:16:31.937 [2024-11-20T07:26:35.870Z] Total : 126497.21 123.53 0.00 0.00 16086.14 2990.62 25527.56' 00:16:31.937 07:26:35 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:16:31.937 07:26:35 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:16:31.937 07:26:35 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:16:31.937 07:26:35 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:16:31.937 07:26:35 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:16:31.937 07:26:35 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:31.937 00:16:31.937 real 0m18.214s 00:16:31.937 user 0m16.454s 00:16:31.937 sys 0m1.281s 00:16:31.937 07:26:35 bdevperf_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.937 07:26:35 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:16:31.937 ************************************ 00:16:31.937 END TEST bdevperf_config 00:16:31.937 ************************************ 00:16:32.204 07:26:35 -- spdk/autotest.sh@182 -- # uname -s 00:16:32.204 07:26:35 -- spdk/autotest.sh@182 -- # [[ Linux == Linux ]] 00:16:32.204 07:26:35 -- spdk/autotest.sh@183 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:16:32.204 07:26:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:32.204 07:26:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:32.204 07:26:35 -- common/autotest_common.sh@10 -- # set +x 00:16:32.204 ************************************ 00:16:32.204 START TEST reactor_set_interrupt 00:16:32.204 
************************************ 00:16:32.204 07:26:35 reactor_set_interrupt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:16:32.204 * Looking for test storage... 00:16:32.204 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:16:32.204 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:32.204 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:16:32.204 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:32.204 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@345 -- # : 1 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@365 -- # decimal 1 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@353 -- # local d=1 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@355 -- # echo 1 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@366 -- # decimal 2 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@353 -- # local d=2 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@355 -- # echo 2 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:32.204 07:26:36 reactor_set_interrupt -- scripts/common.sh@368 -- # return 0 00:16:32.204 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:32.204 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:32.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.204 --rc genhtml_branch_coverage=1 00:16:32.204 --rc genhtml_function_coverage=1 00:16:32.204 --rc genhtml_legend=1 00:16:32.204 --rc geninfo_all_blocks=1 00:16:32.204 --rc geninfo_unexecuted_blocks=1 00:16:32.204 00:16:32.204 ' 00:16:32.204 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:32.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.204 --rc genhtml_branch_coverage=1 00:16:32.204 --rc genhtml_function_coverage=1 00:16:32.204 --rc genhtml_legend=1 00:16:32.204 --rc geninfo_all_blocks=1 00:16:32.204 --rc geninfo_unexecuted_blocks=1 00:16:32.204 00:16:32.204 ' 00:16:32.204 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:32.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.204 --rc genhtml_branch_coverage=1 00:16:32.204 --rc genhtml_function_coverage=1 00:16:32.204 --rc genhtml_legend=1 00:16:32.204 --rc geninfo_all_blocks=1 00:16:32.204 --rc geninfo_unexecuted_blocks=1 00:16:32.204 00:16:32.204 ' 00:16:32.204 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:32.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.204 --rc genhtml_branch_coverage=1 00:16:32.204 --rc genhtml_function_coverage=1 00:16:32.204 --rc genhtml_legend=1 00:16:32.204 --rc geninfo_all_blocks=1 00:16:32.204 --rc geninfo_unexecuted_blocks=1 00:16:32.204 00:16:32.204 ' 00:16:32.204 07:26:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:16:32.469 07:26:36 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:16:32.469 07:26:36 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:16:32.469 07:26:36 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:16:32.469 07:26:36 
reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:16:32.469 07:26:36 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:32.469 07:26:36 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:16:32.469 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:32.469 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:16:32.469 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:32.469 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:32.469 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:32.469 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:16:32.469 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:16:32.469 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@23 -- # 
CONFIG_CET=n 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=y 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:16:32.469 07:26:36 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@59 -- # 
CONFIG_HAVE_EVP_MAC=y 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_FC=n 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:16:32.470 07:26:36 reactor_set_interrupt -- common/build_config.sh@90 -- # CONFIG_URING=n 00:16:32.470 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:32.470 07:26:36 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:32.470 07:26:36 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:16:32.470 07:26:36 reactor_set_interrupt -- common/applications.sh@8 -- # 
_root=/home/vagrant/spdk_repo/spdk/test/common 00:16:32.470 07:26:36 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:16:32.470 07:26:36 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:16:32.470 07:26:36 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:16:32.470 07:26:36 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:16:32.470 07:26:36 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:32.470 07:26:36 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:32.470 07:26:36 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:32.470 07:26:36 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:32.470 07:26:36 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:32.470 07:26:36 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:32.470 07:26:36 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:16:32.470 07:26:36 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:32.470 #define SPDK_CONFIG_H 00:16:32.470 #define SPDK_CONFIG_AIO_FSDEV 1 00:16:32.470 #define SPDK_CONFIG_APPS 1 00:16:32.470 #define SPDK_CONFIG_ARCH native 00:16:32.470 #define SPDK_CONFIG_ASAN 1 00:16:32.470 #undef SPDK_CONFIG_AVAHI 00:16:32.470 #undef SPDK_CONFIG_CET 00:16:32.470 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:16:32.470 #define SPDK_CONFIG_COVERAGE 1 00:16:32.470 #define SPDK_CONFIG_CROSS_PREFIX 00:16:32.470 #undef SPDK_CONFIG_CRYPTO 00:16:32.470 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:32.470 #undef SPDK_CONFIG_CUSTOMOCF 00:16:32.470 #undef SPDK_CONFIG_DAOS 00:16:32.470 #define SPDK_CONFIG_DAOS_DIR 00:16:32.470 #define SPDK_CONFIG_DEBUG 1 00:16:32.470 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:32.470 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:16:32.470 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:32.470 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:32.470 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:32.470 #undef SPDK_CONFIG_DPDK_UADK 00:16:32.470 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:32.470 #define SPDK_CONFIG_EXAMPLES 1 00:16:32.470 #undef SPDK_CONFIG_FC 00:16:32.470 #define SPDK_CONFIG_FC_PATH 00:16:32.470 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:32.470 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:32.470 #define SPDK_CONFIG_FSDEV 1 00:16:32.470 #undef SPDK_CONFIG_FUSE 00:16:32.470 #undef SPDK_CONFIG_FUZZER 00:16:32.470 #define SPDK_CONFIG_FUZZER_LIB 00:16:32.470 #undef SPDK_CONFIG_GOLANG 00:16:32.470 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:32.470 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:32.470 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:32.470 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:16:32.470 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:32.470 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:32.470 #undef SPDK_CONFIG_HAVE_LZ4 00:16:32.470 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:16:32.470 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:16:32.470 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:32.470 #define SPDK_CONFIG_IDXD 1 00:16:32.470 #define 
SPDK_CONFIG_IDXD_KERNEL 1 00:16:32.470 #undef SPDK_CONFIG_IPSEC_MB 00:16:32.470 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:32.470 #define SPDK_CONFIG_ISAL 1 00:16:32.470 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:32.470 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:32.470 #define SPDK_CONFIG_LIBDIR 00:16:32.470 #undef SPDK_CONFIG_LTO 00:16:32.470 #define SPDK_CONFIG_MAX_LCORES 128 00:16:32.470 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:16:32.470 #define SPDK_CONFIG_NVME_CUSE 1 00:16:32.470 #undef SPDK_CONFIG_OCF 00:16:32.470 #define SPDK_CONFIG_OCF_PATH 00:16:32.470 #define SPDK_CONFIG_OPENSSL_PATH 00:16:32.470 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:32.470 #define SPDK_CONFIG_PGO_DIR 00:16:32.470 #undef SPDK_CONFIG_PGO_USE 00:16:32.470 #define SPDK_CONFIG_PREFIX /usr/local 00:16:32.470 #undef SPDK_CONFIG_RAID5F 00:16:32.470 #undef SPDK_CONFIG_RBD 00:16:32.470 #define SPDK_CONFIG_RDMA 1 00:16:32.470 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:32.470 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:32.470 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:32.470 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:32.470 #undef SPDK_CONFIG_SHARED 00:16:32.470 #undef SPDK_CONFIG_SMA 00:16:32.470 #define SPDK_CONFIG_TESTS 1 00:16:32.470 #undef SPDK_CONFIG_TSAN 00:16:32.470 #define SPDK_CONFIG_UBLK 1 00:16:32.470 #define SPDK_CONFIG_UBSAN 1 00:16:32.470 #define SPDK_CONFIG_UNIT_TESTS 1 00:16:32.470 #undef SPDK_CONFIG_URING 00:16:32.470 #define SPDK_CONFIG_URING_PATH 00:16:32.470 #undef SPDK_CONFIG_URING_ZNS 00:16:32.470 #undef SPDK_CONFIG_USDT 00:16:32.470 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:32.470 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:32.470 #undef SPDK_CONFIG_VFIO_USER 00:16:32.470 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:32.470 #define SPDK_CONFIG_VHOST 1 00:16:32.470 #define SPDK_CONFIG_VIRTIO 1 00:16:32.470 #undef SPDK_CONFIG_VTUNE 00:16:32.470 #define SPDK_CONFIG_VTUNE_DIR 00:16:32.470 #define SPDK_CONFIG_WERROR 1 00:16:32.470 #define SPDK_CONFIG_WPDK_DIR 00:16:32.470 #undef SPDK_CONFIG_XNVME 00:16:32.470 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:32.470 07:26:36 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:32.470 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:32.470 07:26:36 reactor_set_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:16:32.470 07:26:36 reactor_set_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.470 07:26:36 reactor_set_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.470 07:26:36 reactor_set_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.470 07:26:36 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:32.471 07:26:36 reactor_set_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:32.471 07:26:36 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:32.471 07:26:36 reactor_set_interrupt -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:32.471 07:26:36 reactor_set_interrupt -- paths/export.sh@6 -- # export PATH 00:16:32.471 07:26:36 reactor_set_interrupt -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:16:32.471 07:26:36 
reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:16:32.471 07:26:36 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 1 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 1 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export 
SPDK_TEST_NVME_PMR 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@96 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:16:32.471 07:26:36 reactor_set_interrupt -- 
common/autotest_common.sh@120 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 1 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 0 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:16:32.471 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : true 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:16:32.472 07:26:36 reactor_set_interrupt -- 
common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 0 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@173 -- # : 0 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@175 -- # : 0 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@177 -- # : 0 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@206 -- # cat 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export 
SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export QEMU_BIN= 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@259 -- # QEMU_BIN= 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@260 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@269 -- # _LCOV= 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@275 -- # lcov_opt= 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@279 -- # export valgrind= 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@279 -- # valgrind= 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@285 -- # uname -s 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@289 -- # MAKE=make 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:16:32.472 07:26:36 reactor_set_interrupt 
-- common/autotest_common.sh@308 -- # NO_HUGE=() 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@309 -- # TEST_MODE= 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@331 -- # [[ -z 73436 ]] 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@331 -- # kill -0 73436 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:16:32.472 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@344 -- # local mount target_dir 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.lURRvh 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.lURRvh/tests/interrupt /tmp/spdk.lURRvh 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@340 -- # df -T 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=1249308672 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=1254023168 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=4714496 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda1 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=9693044736 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=19681529856 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=9971707904 00:16:32.473 07:26:36 reactor_set_interrupt 
-- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=6265348096 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=6270111744 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=5242880 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=5242880 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda16 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=777306112 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=923156480 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=81207296 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda15 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=103000064 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=109395968 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=6395904 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=1254006784 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=1254019072 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:16:32.473 07:26:36 
reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=94936453120 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=4766326784 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:16:32.473 * Looking for test storage... 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@381 -- # local target_space new_size 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@385 -- # mount=/ 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@387 -- # target_space=9693044736 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@393 -- # [[ ext4 == tmpfs ]] 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@393 -- # [[ ext4 == ramfs ]] 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@394 -- # new_size=12186300416 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:16:32.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@402 -- # return 0 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set -o errtrace 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1685 -- # true 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # xtrace_fd 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 
13 ]] 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:16:32.473 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:32.734 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@345 -- # : 1 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@365 -- # decimal 1 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@353 -- # local d=1 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@355 -- # echo 1 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@366 -- # decimal 2 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@353 -- # local d=2 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@355 -- # echo 2 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:32.734 07:26:36 reactor_set_interrupt -- scripts/common.sh@368 -- # return 0 00:16:32.734 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:32.734 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:32.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.734 --rc genhtml_branch_coverage=1 00:16:32.734 --rc genhtml_function_coverage=1 00:16:32.734 --rc genhtml_legend=1 00:16:32.734 --rc geninfo_all_blocks=1 00:16:32.734 --rc geninfo_unexecuted_blocks=1 00:16:32.734 00:16:32.734 ' 00:16:32.734 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:32.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.734 --rc genhtml_branch_coverage=1 00:16:32.734 --rc genhtml_function_coverage=1 00:16:32.734 --rc genhtml_legend=1 00:16:32.734 --rc geninfo_all_blocks=1 00:16:32.734 --rc geninfo_unexecuted_blocks=1 00:16:32.734 00:16:32.734 ' 00:16:32.734 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:32.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.734 --rc genhtml_branch_coverage=1 00:16:32.734 --rc genhtml_function_coverage=1 00:16:32.734 --rc genhtml_legend=1 00:16:32.734 --rc geninfo_all_blocks=1 00:16:32.734 --rc geninfo_unexecuted_blocks=1 00:16:32.734 00:16:32.734 ' 00:16:32.734 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:32.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.734 --rc genhtml_branch_coverage=1 00:16:32.734 --rc genhtml_function_coverage=1 00:16:32.734 --rc genhtml_legend=1 00:16:32.734 --rc geninfo_all_blocks=1 00:16:32.734 --rc geninfo_unexecuted_blocks=1 00:16:32.734 00:16:32.734 ' 00:16:32.734 07:26:36 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:16:32.734 07:26:36 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:32.734 07:26:36 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:16:32.734 07:26:36 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:16:32.734 07:26:36 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:16:32.734 07:26:36 reactor_set_interrupt -- 
interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:16:32.734 07:26:36 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:16:32.734 07:26:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:16:32.734 07:26:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:16:32.734 07:26:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:16:32.734 07:26:36 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.734 07:26:36 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:16:32.734 07:26:36 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=73503 00:16:32.734 07:26:36 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:16:32.734 07:26:36 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:32.734 07:26:36 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 73503 /var/tmp/spdk.sock 00:16:32.734 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@835 -- # '[' -z 73503 ']' 00:16:32.734 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.734 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.734 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.734 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.734 07:26:36 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:16:32.735 [2024-11-20 07:26:36.499724] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
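The trace entries from interrupt_common.sh@20-26 above correspond to the start_intr_tgt helper: it launches the interrupt_tgt example app on the 0x07 core mask, records its PID (73436/73503 in this run), installs a cleanup trap, and blocks until the RPC socket is listening. A minimal sketch of that flow, reconstructed from the trace; the default-argument handling and the use of "$!" to capture the PID are assumptions, and SPDK_EXAMPLE_DIR, killprocess, cleanup and waitforlisten are the helpers seen elsewhere in this log:

    start_intr_tgt() {
        local rpc_addr=${1:-/var/tmp/spdk.sock}
        local cpu_mask=${2:-0x07}

        # Launch the interrupt_tgt example app in the background on a 3-core mask.
        "$SPDK_EXAMPLE_DIR/interrupt_tgt" -m "$cpu_mask" -r "$rpc_addr" -E -g &
        intr_tgt_pid=$!

        # Tear down the target and its AIO backing file on any abnormal exit.
        trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT

        # Block until the RPC server is accepting connections on the UNIX socket.
        waitforlisten "$intr_tgt_pid" "$rpc_addr"
    }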
00:16:32.735 [2024-11-20 07:26:36.499831] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73503 ] 00:16:32.994 [2024-11-20 07:26:36.675921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:32.994 [2024-11-20 07:26:36.789275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.994 [2024-11-20 07:26:36.789426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.994 [2024-11-20 07:26:36.789469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.254 [2024-11-20 07:26:37.084777] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:33.514 07:26:37 reactor_set_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.514 07:26:37 reactor_set_interrupt -- common/autotest_common.sh@868 -- # return 0 00:16:33.514 07:26:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:16:33.514 07:26:37 reactor_set_interrupt -- interrupt/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.773 Malloc0 00:16:33.773 Malloc1 00:16:33.773 Malloc2 00:16:33.773 07:26:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:16:33.773 07:26:37 reactor_set_interrupt -- interrupt/common.sh@77 -- # uname -s 00:16:33.773 07:26:37 reactor_set_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:16:33.773 07:26:37 reactor_set_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:16:33.773 5000+0 records in 00:16:33.773 5000+0 records out 00:16:33.773 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0236283 s, 433 MB/s 00:16:33.773 07:26:37 reactor_set_interrupt -- interrupt/common.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:16:34.033 AIO0 00:16:34.033 07:26:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 73503 00:16:34.033 07:26:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 73503 without_thd 00:16:34.033 07:26:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=73503 00:16:34.033 07:26:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:16:34.033 07:26:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:16:34.033 07:26:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:16:34.033 07:26:37 reactor_set_interrupt -- interrupt/common.sh@57 -- # local reactor_cpumask=0x1 00:16:34.033 07:26:37 reactor_set_interrupt -- interrupt/common.sh@58 -- # local grep_str 00:16:34.033 07:26:37 reactor_set_interrupt -- interrupt/common.sh@60 -- # reactor_cpumask=1 00:16:34.033 07:26:37 reactor_set_interrupt -- interrupt/common.sh@61 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:34.033 07:26:37 reactor_set_interrupt -- interrupt/common.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:16:34.033 07:26:37 reactor_set_interrupt -- 
interrupt/common.sh@64 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:34.293 07:26:38 reactor_set_interrupt -- interrupt/common.sh@64 -- # echo 1 00:16:34.293 07:26:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:16:34.293 07:26:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:16:34.293 07:26:38 reactor_set_interrupt -- interrupt/common.sh@57 -- # local reactor_cpumask=0x4 00:16:34.293 07:26:38 reactor_set_interrupt -- interrupt/common.sh@58 -- # local grep_str 00:16:34.293 07:26:38 reactor_set_interrupt -- interrupt/common.sh@60 -- # reactor_cpumask=4 00:16:34.293 07:26:38 reactor_set_interrupt -- interrupt/common.sh@61 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:34.293 07:26:38 reactor_set_interrupt -- interrupt/common.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:16:34.293 07:26:38 reactor_set_interrupt -- interrupt/common.sh@64 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:34.553 07:26:38 reactor_set_interrupt -- interrupt/common.sh@64 -- # echo '' 00:16:34.553 07:26:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:16:34.553 spdk_thread ids are 1 on reactor0. 00:16:34.553 07:26:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:16:34.553 07:26:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:16:34.553 07:26:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 73503 0 00:16:34.553 07:26:38 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 73503 0 idle 00:16:34.553 07:26:38 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=73503 00:16:34.553 07:26:38 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:16:34.553 07:26:38 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:16:34.553 07:26:38 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:16:34.553 07:26:38 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:16:34.553 07:26:38 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:16:34.553 07:26:38 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:16:34.553 07:26:38 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top 00:16:34.553 07:26:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:16:34.553 07:26:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:16:34.553 07:26:38 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 73503 -w 256 00:16:34.553 07:26:38 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 73503 root 20 0 20.1t 148992 33408 S 0.0 1.2 0:00.68 reactor_0' 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 73503 root 20 0 20.1t 148992 33408 S 0.0 1.2 0:00.68 reactor_0 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 
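The reactor_get_thread_ids calls traced just above resolve which SPDK threads live on a given reactor by filtering thread_get_stats output with jq on the thread cpumask. A short sketch of that helper as it appears in the trace; normalizing "0x1" to "1" via arithmetic expansion is an assumption (the trace only shows the before/after values), rpc_py is the variable set in interrupt_common.sh@10:

    reactor_get_thread_ids() {
        local reactor_cpumask=$1
        local jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'

        # The trace shows 0x1 becoming 1 and 0x4 becoming 4 before the jq match;
        # arithmetic expansion is one way to get that normalization.
        reactor_cpumask=$((reactor_cpumask))

        # Filter thread_get_stats down to the ids of threads pinned to this reactor.
        "$rpc_py" thread_get_stats \
            | jq --arg reactor_cpumask "$reactor_cpumask" "$jq_str"
    }

Used as in the trace: thd0_ids=($(reactor_get_thread_ids 0x1)) yields thread id 1 (the app_thread on reactor 0), while reactor 2 has no dedicated thread yet, so thd2_ids comes back empty.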
00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 73503 1 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 73503 1 idle 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=73503 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 73503 -w 256 00:16:34.820 07:26:38 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 73508 root 20 0 20.1t 148992 33408 S 0.0 1.2 0:00.00 reactor_1' 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 73508 root 20 0 20.1t 148992 33408 S 0.0 1.2 0:00.00 reactor_1 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 73503 2 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 73503 2 idle 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=73503 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@12 -- # local 
state=idle 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 73503 -w 256 00:16:35.080 07:26:38 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2 00:16:35.339 07:26:39 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 73509 root 20 0 20.1t 148992 33408 S 0.0 1.2 0:00.00 reactor_2' 00:16:35.339 07:26:39 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 73509 root 20 0 20.1t 148992 33408 S 0.0 1.2 0:00.00 reactor_2 00:16:35.339 07:26:39 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:16:35.340 07:26:39 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:16:35.340 07:26:39 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:16:35.340 07:26:39 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:16:35.340 07:26:39 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:16:35.340 07:26:39 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:16:35.340 07:26:39 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:16:35.340 07:26:39 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0 00:16:35.340 07:26:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:16:35.340 07:26:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:16:35.340 07:26:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:16:35.340 [2024-11-20 07:26:39.210572] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:35.340 07:26:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:16:35.599 [2024-11-20 07:26:39.410476] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:16:35.599 [2024-11-20 07:26:39.411319] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:35.599 07:26:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:16:35.858 [2024-11-20 07:26:39.606354] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
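Every reactor_is_idle / reactor_is_busy call in this log reduces to the same check: take one batch sample of the target's threads with top, read the %CPU column for the reactor_<idx> thread, and compare it against fixed thresholds (65% for busy, 30% for idle). A condensed sketch of that check; the retry loop from the trace (j counting down from 10 between samples) and the thin reactor_is_busy/reactor_is_idle wrappers are left out, and the exact return logic is a simplification:

    reactor_is_busy_or_idle() {
        local pid=$1 idx=$2 state=$3
        local busy_threshold=65 idle_threshold=30
        local top_reactor cpu_rate

        # One batch snapshot of the target's threads; keep the reactor_<idx> row.
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")

        # Column 9 of top's per-thread output is %CPU; drop the fractional part.
        cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu_rate=${cpu_rate%.*}

        if [[ $state == busy ]]; then
            (( cpu_rate >= busy_threshold ))
        else
            (( cpu_rate <= idle_threshold ))
        fi
    }

With all three reactors in interrupt mode the sampled rates above are 0.0%, so the idle checks pass immediately.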
00:16:35.858 [2024-11-20 07:26:39.607063] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:35.858 07:26:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:16:35.858 07:26:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 73503 0 00:16:35.858 07:26:39 reactor_set_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 73503 0 busy 00:16:35.858 07:26:39 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=73503 00:16:35.858 07:26:39 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:16:35.858 07:26:39 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:16:35.858 07:26:39 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:16:35.858 07:26:39 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:16:35.858 07:26:39 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:16:35.858 07:26:39 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top 00:16:35.858 07:26:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:16:35.858 07:26:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:16:35.858 07:26:39 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:16:35.858 07:26:39 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 73503 -w 256 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 73503 root 20 0 20.1t 152448 33408 R 90.9 1.2 0:01.12 reactor_0' 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 73503 root 20 0 20.1t 152448 33408 R 90.9 1.2 0:01.12 reactor_0 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=90.9 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=90 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 73503 2 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 73503 2 busy 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=73503 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top 00:16:36.118 07:26:39 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 73503 -w 256 00:16:36.118 07:26:39 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2 00:16:36.378 07:26:40 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 73509 root 20 0 20.1t 152448 33408 R 99.9 1.2 0:00.46 reactor_2' 00:16:36.378 07:26:40 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 73509 root 20 0 20.1t 152448 33408 R 99.9 1.2 0:00.46 reactor_2 00:16:36.378 07:26:40 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:16:36.378 07:26:40 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:16:36.378 07:26:40 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:16:36.378 07:26:40 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:16:36.378 07:26:40 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:16:36.378 07:26:40 reactor_set_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:16:36.378 07:26:40 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:16:36.378 07:26:40 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0 00:16:36.378 07:26:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:16:36.378 [2024-11-20 07:26:40.282366] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:16:36.378 [2024-11-20 07:26:40.282808] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 73503 2 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 73503 2 idle 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=73503 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 73503 -w 256 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 73509 root 20 0 20.1t 152448 33408 S 0.0 1.2 0:00.66 reactor_2' 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 73509 root 20 0 20.1t 
152448 33408 S 0.0 1.2 0:00.66 reactor_2 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0 00:16:36.638 07:26:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:16:36.898 [2024-11-20 07:26:40.730282] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:16:36.898 [2024-11-20 07:26:40.730752] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:36.898 07:26:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:16:36.898 07:26:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:16:36.898 07:26:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:16:37.158 [2024-11-20 07:26:40.922980] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
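The reactor_set_interrupt_mode RPCs traced above are the core of the "without_thd" pass: the app thread (id 1) is parked on reactor 1, reactors 0 and 2 are switched from interrupt mode to poll mode with -d and verified busy in top, then switched back and verified idle. Reconstructed from the traced commands (pid and thread id are the values seen in this run; reactor_is_busy_or_idle is the check sketched earlier):

    # Park the app thread (id 1) on reactor 1, then flip reactors 0 and 2 from
    # interrupt mode to poll mode and expect both to show up busy in top.
    "$rpc_py" thread_set_cpumask -i 1 -m 0x2
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
    for i in 0 2; do
        reactor_is_busy_or_idle "$intr_tgt_pid" "$i" busy
    done

    # Re-enable interrupt mode, move the app thread back onto reactor 0, and
    # make sure both reactors drop back to idle.
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 2
    reactor_is_busy_or_idle "$intr_tgt_pid" 2 idle
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 0
    "$rpc_py" thread_set_cpumask -i 1 -m 0x1
    reactor_is_busy_or_idle "$intr_tgt_pid" 0 idle

The busy samples above (90.9% on reactor 0, 99.9% on reactor 2) sit well over the 65% threshold, and the post-switch samples (0.0% and 10.0%) sit under the 30% idle threshold, so both halves of the check pass.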
00:16:37.158 07:26:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 73503 0 00:16:37.158 07:26:40 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 73503 0 idle 00:16:37.158 07:26:40 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=73503 00:16:37.158 07:26:40 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:16:37.158 07:26:40 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:16:37.158 07:26:40 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:16:37.158 07:26:40 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:16:37.158 07:26:40 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:16:37.158 07:26:40 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:16:37.158 07:26:40 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top 00:16:37.158 07:26:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:16:37.158 07:26:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:16:37.158 07:26:40 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:16:37.158 07:26:40 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 73503 -w 256 00:16:37.418 07:26:41 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 73503 root 20 0 20.1t 152576 33408 S 10.0 1.2 0:02.01 reactor_0' 00:16:37.418 07:26:41 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 73503 root 20 0 20.1t 152576 33408 S 10.0 1.2 0:02.01 reactor_0 00:16:37.418 07:26:41 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:16:37.418 07:26:41 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:16:37.418 07:26:41 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=10.0 00:16:37.418 07:26:41 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=10 00:16:37.418 07:26:41 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:16:37.418 07:26:41 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:16:37.418 07:26:41 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:16:37.418 07:26:41 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0 00:16:37.418 07:26:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:16:37.418 07:26:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:16:37.418 07:26:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:16:37.418 07:26:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 73503 00:16:37.418 07:26:41 reactor_set_interrupt -- common/autotest_common.sh@954 -- # '[' -z 73503 ']' 00:16:37.418 07:26:41 reactor_set_interrupt -- common/autotest_common.sh@958 -- # kill -0 73503 00:16:37.418 07:26:41 reactor_set_interrupt -- common/autotest_common.sh@959 -- # uname 00:16:37.418 07:26:41 reactor_set_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.418 07:26:41 reactor_set_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73503 00:16:37.418 07:26:41 reactor_set_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:37.418 killing process with pid 73503 00:16:37.418 07:26:41 reactor_set_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 
= sudo ']' 00:16:37.418 07:26:41 reactor_set_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73503' 00:16:37.418 07:26:41 reactor_set_interrupt -- common/autotest_common.sh@973 -- # kill 73503 00:16:37.418 07:26:41 reactor_set_interrupt -- common/autotest_common.sh@978 -- # wait 73503 00:16:39.346 07:26:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:16:39.346 07:26:42 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:16:39.346 07:26:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:16:39.346 07:26:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.346 07:26:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:16:39.346 07:26:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=73647 00:16:39.346 07:26:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:16:39.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.346 07:26:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:39.346 07:26:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 73647 /var/tmp/spdk.sock 00:16:39.346 07:26:42 reactor_set_interrupt -- common/autotest_common.sh@835 -- # '[' -z 73647 ']' 00:16:39.346 07:26:42 reactor_set_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.346 07:26:42 reactor_set_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.346 07:26:42 reactor_set_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.346 07:26:42 reactor_set_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.346 07:26:42 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:16:39.346 [2024-11-20 07:26:42.815336] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:16:39.346 [2024-11-20 07:26:42.815469] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73647 ] 00:16:39.346 [2024-11-20 07:26:42.992725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:39.346 [2024-11-20 07:26:43.111274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.346 [2024-11-20 07:26:43.111433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.346 [2024-11-20 07:26:43.111474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.606 [2024-11-20 07:26:43.400224] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
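Tear-down between the two halves of the test goes through killprocess, which the trace shows validating the pid, reading the command name (reactor_0 for the interrupt target) so it never signals sudo by mistake, then kill + wait. A compressed sketch of that helper based on the traced checks; the FreeBSD branch behind the uname test is omitted and the return codes are assumptions:

    killprocess() {
        local pid=$1
        # Nothing to do for an empty pid or one that is already gone.
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0

        # Read the command name and refuse to signal sudo, then terminate
        # and reap the process.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name == sudo ]] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }

After the kill, cleanup removes the AIO backing file and start_intr_tgt is invoked again (pid 73647) for the "with threads" variant that follows.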
00:16:39.865 07:26:43 reactor_set_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.865 07:26:43 reactor_set_interrupt -- common/autotest_common.sh@868 -- # return 0 00:16:39.865 07:26:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:16:39.865 07:26:43 reactor_set_interrupt -- interrupt/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.124 Malloc0 00:16:40.124 Malloc1 00:16:40.124 Malloc2 00:16:40.124 07:26:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:16:40.124 07:26:44 reactor_set_interrupt -- interrupt/common.sh@77 -- # uname -s 00:16:40.124 07:26:44 reactor_set_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:16:40.124 07:26:44 reactor_set_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:16:40.124 5000+0 records in 00:16:40.124 5000+0 records out 00:16:40.124 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0228012 s, 449 MB/s 00:16:40.124 07:26:44 reactor_set_interrupt -- interrupt/common.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:16:40.383 AIO0 00:16:40.383 07:26:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 73647 00:16:40.383 07:26:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 73647 00:16:40.383 07:26:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=73647 00:16:40.383 07:26:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:16:40.383 07:26:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:16:40.383 07:26:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:16:40.383 07:26:44 reactor_set_interrupt -- interrupt/common.sh@57 -- # local reactor_cpumask=0x1 00:16:40.383 07:26:44 reactor_set_interrupt -- interrupt/common.sh@58 -- # local grep_str 00:16:40.383 07:26:44 reactor_set_interrupt -- interrupt/common.sh@60 -- # reactor_cpumask=1 00:16:40.383 07:26:44 reactor_set_interrupt -- interrupt/common.sh@61 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:40.383 07:26:44 reactor_set_interrupt -- interrupt/common.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:16:40.383 07:26:44 reactor_set_interrupt -- interrupt/common.sh@64 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:40.643 07:26:44 reactor_set_interrupt -- interrupt/common.sh@64 -- # echo 1 00:16:40.643 07:26:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:16:40.643 07:26:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:16:40.643 07:26:44 reactor_set_interrupt -- interrupt/common.sh@57 -- # local reactor_cpumask=0x4 00:16:40.643 07:26:44 reactor_set_interrupt -- interrupt/common.sh@58 -- # local grep_str 00:16:40.643 07:26:44 reactor_set_interrupt -- interrupt/common.sh@60 -- # reactor_cpumask=4 00:16:40.643 07:26:44 reactor_set_interrupt -- interrupt/common.sh@61 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:40.643 07:26:44 reactor_set_interrupt -- interrupt/common.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:16:40.643 07:26:44 reactor_set_interrupt -- interrupt/common.sh@64 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:40.903 07:26:44 reactor_set_interrupt -- interrupt/common.sh@64 -- # echo '' 00:16:40.903 spdk_thread ids are 1 on reactor0. 00:16:40.903 07:26:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:16:40.903 07:26:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:16:40.903 07:26:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:16:40.903 07:26:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 73647 0 00:16:40.903 07:26:44 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 73647 0 idle 00:16:40.903 07:26:44 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=73647 00:16:40.903 07:26:44 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:16:40.903 07:26:44 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:16:40.903 07:26:44 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:16:40.903 07:26:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:16:40.903 07:26:44 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:16:40.903 07:26:44 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:16:40.903 07:26:44 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top 00:16:40.903 07:26:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:16:40.903 07:26:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:16:40.903 07:26:44 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:16:40.903 07:26:44 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 73647 -w 256 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 73647 root 20 0 20.1t 148864 33152 S 0.0 1.2 0:00.67 reactor_0' 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 73647 root 20 0 20.1t 148864 33152 S 0.0 1.2 0:00.67 reactor_0 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 73647 1 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 73647 1 idle 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=73647 00:16:41.163 07:26:44 reactor_set_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 73647 -w 256 00:16:41.163 07:26:44 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 73650 root 20 0 20.1t 148864 33152 S 0.0 1.2 0:00.00 reactor_1' 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 73650 root 20 0 20.1t 148864 33152 S 0.0 1.2 0:00.00 reactor_1 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 73647 2 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 73647 2 idle 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=73647 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 73647 -w 256 00:16:41.423 07:26:45 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2 00:16:41.683 07:26:45 reactor_set_interrupt -- interrupt/common.sh@26 -- # 
top_reactor=' 73651 root 20 0 20.1t 148864 33152 S 0.0 1.2 0:00.00 reactor_2' 00:16:41.683 07:26:45 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 73651 root 20 0 20.1t 148864 33152 S 0.0 1.2 0:00.00 reactor_2 00:16:41.683 07:26:45 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:16:41.683 07:26:45 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:16:41.683 07:26:45 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:16:41.683 07:26:45 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:16:41.683 07:26:45 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:16:41.683 07:26:45 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:16:41.683 07:26:45 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:16:41.683 07:26:45 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0 00:16:41.683 07:26:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:16:41.683 07:26:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:16:41.683 [2024-11-20 07:26:45.580336] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:16:41.683 [2024-11-20 07:26:45.580979] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:16:41.683 [2024-11-20 07:26:45.581388] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:41.683 07:26:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:16:41.943 [2024-11-20 07:26:45.763818] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
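The reactor_is_idle checks just traced (interrupt/common.sh) boil down to taking one batch sample of top for the target pid, isolating the reactor_<idx> thread row, and comparing its %CPU column against fixed thresholds (busy at 65% and above, idle at 30% and below). A condensed sketch of that helper, with the column position and thresholds taken from the trace; the retry loop ((( j = 10 )) in the trace) is omitted:

  # Condensed form of interrupt/common.sh reactor_is_busy_or_idle, as traced above.
  reactor_is_busy_or_idle() {
      local pid=$1 idx=$2 state=$3
      local busy_threshold=65 idle_threshold=30

      # One batch iteration of top, per-thread (-H), wide output, limited to $pid;
      # keep only the reactor_<idx> thread row.
      local row
      row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}") || return 1

      # Field 9 of the row is %CPU; drop the fractional part for integer math.
      local cpu_rate
      cpu_rate=$(sed -e 's/^\s*//g' <<< "$row" | awk '{print $9}')
      cpu_rate=${cpu_rate%.*}

      if [[ $state == busy ]]; then
          (( cpu_rate >= busy_threshold ))
      else
          (( cpu_rate <= idle_threshold ))
      fi
  }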
00:16:41.943 [2024-11-20 07:26:45.764636] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:41.943 07:26:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:16:41.943 07:26:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 73647 0 00:16:41.943 07:26:45 reactor_set_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 73647 0 busy 00:16:41.943 07:26:45 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=73647 00:16:41.943 07:26:45 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:16:41.943 07:26:45 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:16:41.943 07:26:45 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:16:41.943 07:26:45 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:16:41.943 07:26:45 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:16:41.943 07:26:45 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top 00:16:41.943 07:26:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:16:41.943 07:26:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:16:41.943 07:26:45 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 73647 -w 256 00:16:41.943 07:26:45 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 73647 root 20 0 20.1t 152064 33152 R 99.9 1.2 0:01.11 reactor_0' 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 73647 root 20 0 20.1t 152064 33152 R 99.9 1.2 0:01.11 reactor_0 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 73647 2 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 73647 2 busy 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=73647 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top 00:16:42.203 07:26:46 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 73647 -w 256 00:16:42.203 07:26:46 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2 00:16:42.463 07:26:46 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 73651 root 20 0 20.1t 152064 33152 R 99.9 1.2 0:00.47 reactor_2' 00:16:42.463 07:26:46 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 73651 root 20 0 20.1t 152064 33152 R 99.9 1.2 0:00.47 reactor_2 00:16:42.463 07:26:46 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:16:42.463 07:26:46 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:16:42.463 07:26:46 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:16:42.463 07:26:46 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:16:42.463 07:26:46 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:16:42.463 07:26:46 reactor_set_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:16:42.463 07:26:46 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:16:42.463 07:26:46 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0 00:16:42.463 07:26:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:16:42.722 [2024-11-20 07:26:46.450732] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:16:42.722 [2024-11-20 07:26:46.451142] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:42.722 07:26:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:16:42.722 07:26:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 73647 2 00:16:42.722 07:26:46 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 73647 2 idle 00:16:42.722 07:26:46 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=73647 00:16:42.722 07:26:46 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:16:42.722 07:26:46 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:16:42.722 07:26:46 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:16:42.722 07:26:46 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:16:42.722 07:26:46 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:16:42.722 07:26:46 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:16:42.722 07:26:46 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top 00:16:42.722 07:26:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:16:42.722 07:26:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:16:42.722 07:26:46 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2 00:16:42.722 07:26:46 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 73647 -w 256 00:16:42.981 07:26:46 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 73651 root 20 0 20.1t 152064 33152 S 0.0 1.2 0:00.68 reactor_2' 00:16:42.981 07:26:46 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 73651 root 20 0 20.1t 152064 
33152 S 0.0 1.2 0:00.68 reactor_2 00:16:42.981 07:26:46 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:16:42.981 07:26:46 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:16:42.981 07:26:46 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:16:42.981 07:26:46 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:16:42.981 07:26:46 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:16:42.981 07:26:46 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:16:42.981 07:26:46 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:16:42.981 07:26:46 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0 00:16:42.981 07:26:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:16:43.240 [2024-11-20 07:26:46.909881] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:16:43.241 [2024-11-20 07:26:46.910337] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:16:43.241 [2024-11-20 07:26:46.910394] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:43.241 07:26:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:16:43.241 07:26:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 73647 0 00:16:43.241 07:26:46 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 73647 0 idle 00:16:43.241 07:26:46 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=73647 00:16:43.241 07:26:46 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:16:43.241 07:26:46 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:16:43.241 07:26:46 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:16:43.241 07:26:46 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:16:43.241 07:26:46 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:16:43.241 07:26:46 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:16:43.241 07:26:46 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top 00:16:43.241 07:26:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:16:43.241 07:26:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:16:43.241 07:26:46 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 73647 -w 256 00:16:43.241 07:26:46 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:16:43.241 07:26:47 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 73647 root 20 0 20.1t 152192 33152 S 0.0 1.2 0:02.01 reactor_0' 00:16:43.241 07:26:47 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:16:43.241 07:26:47 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 73647 root 20 0 20.1t 152192 33152 S 0.0 1.2 0:02.01 reactor_0 00:16:43.501 07:26:47 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:16:43.501 07:26:47 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:16:43.501 07:26:47 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:16:43.501 07:26:47 
reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:16:43.501 07:26:47 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:16:43.501 07:26:47 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:16:43.501 07:26:47 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0 00:16:43.501 07:26:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:16:43.501 07:26:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:16:43.501 07:26:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:16:43.501 07:26:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 73647 00:16:43.501 07:26:47 reactor_set_interrupt -- common/autotest_common.sh@954 -- # '[' -z 73647 ']' 00:16:43.501 07:26:47 reactor_set_interrupt -- common/autotest_common.sh@958 -- # kill -0 73647 00:16:43.501 07:26:47 reactor_set_interrupt -- common/autotest_common.sh@959 -- # uname 00:16:43.501 07:26:47 reactor_set_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.501 07:26:47 reactor_set_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73647 00:16:43.501 07:26:47 reactor_set_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.501 killing process with pid 73647 00:16:43.501 07:26:47 reactor_set_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.501 07:26:47 reactor_set_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73647' 00:16:43.501 07:26:47 reactor_set_interrupt -- common/autotest_common.sh@973 -- # kill 73647 00:16:43.501 07:26:47 reactor_set_interrupt -- common/autotest_common.sh@978 -- # wait 73647 00:16:44.883 07:26:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:16:44.883 07:26:48 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:16:44.883 ************************************ 00:16:44.883 END TEST reactor_set_interrupt 00:16:44.883 ************************************ 00:16:44.883 00:16:44.883 real 0m12.859s 00:16:44.883 user 0m12.184s 00:16:44.883 sys 0m2.221s 00:16:44.883 07:26:48 reactor_set_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.883 07:26:48 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:16:44.883 07:26:48 -- spdk/autotest.sh@184 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:16:44.883 07:26:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:44.883 07:26:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.883 07:26:48 -- common/autotest_common.sh@10 -- # set +x 00:16:45.143 ************************************ 00:16:45.143 START TEST reap_unregistered_poller 00:16:45.143 ************************************ 00:16:45.143 07:26:48 reap_unregistered_poller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:16:45.143 * Looking for test storage... 
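The mode switches in the test that just finished are ordinary JSON-RPC calls issued through rpc.py with the interrupt_plugin, exactly as shown in the trace: disable interrupt mode (-d) on reactors 0 and 2, expect both reactor threads to report busy (poll mode spins), then re-enable it and expect them to report idle again. A minimal sketch of that sequence, reusing reactor_is_busy_or_idle from the sketch above and assuming the $intr_tgt_pid from the startup sketch:

  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin"

  # Poll mode: -d disables interrupt mode on the given reactor.
  $rpc_py reactor_set_interrupt_mode 0 -d
  $rpc_py reactor_set_interrupt_mode 2 -d
  reactor_is_busy_or_idle "$intr_tgt_pid" 0 busy   # poll-mode reactors spin near 100% CPU
  reactor_is_busy_or_idle "$intr_tgt_pid" 2 busy

  # Back to interrupt mode; the reactor threads should drop to ~0% CPU again.
  $rpc_py reactor_set_interrupt_mode 2
  reactor_is_busy_or_idle "$intr_tgt_pid" 2 idle
  $rpc_py reactor_set_interrupt_mode 0
  reactor_is_busy_or_idle "$intr_tgt_pid" 0 idle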
00:16:45.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:16:45.143 07:26:48 reap_unregistered_poller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:45.143 07:26:48 reap_unregistered_poller -- common/autotest_common.sh@1693 -- # lcov --version 00:16:45.144 07:26:48 reap_unregistered_poller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:45.144 07:26:48 reap_unregistered_poller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:45.144 07:26:48 reap_unregistered_poller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:45.144 07:26:48 reap_unregistered_poller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:45.144 07:26:48 reap_unregistered_poller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:45.144 07:26:48 reap_unregistered_poller -- scripts/common.sh@336 -- # IFS=.-: 00:16:45.144 07:26:48 reap_unregistered_poller -- scripts/common.sh@336 -- # read -ra ver1 00:16:45.144 07:26:48 reap_unregistered_poller -- scripts/common.sh@337 -- # IFS=.-: 00:16:45.144 07:26:48 reap_unregistered_poller -- scripts/common.sh@337 -- # read -ra ver2 00:16:45.144 07:26:48 reap_unregistered_poller -- scripts/common.sh@338 -- # local 'op=<' 00:16:45.144 07:26:48 reap_unregistered_poller -- scripts/common.sh@340 -- # ver1_l=2 00:16:45.144 07:26:48 reap_unregistered_poller -- scripts/common.sh@341 -- # ver2_l=1 00:16:45.144 07:26:48 reap_unregistered_poller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:45.144 07:26:48 reap_unregistered_poller -- scripts/common.sh@344 -- # case "$op" in 00:16:45.144 07:26:48 reap_unregistered_poller -- scripts/common.sh@345 -- # : 1 00:16:45.144 07:26:48 reap_unregistered_poller -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:45.144 07:26:48 reap_unregistered_poller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:45.144 07:26:48 reap_unregistered_poller -- scripts/common.sh@365 -- # decimal 1 00:16:45.144 07:26:48 reap_unregistered_poller -- scripts/common.sh@353 -- # local d=1 00:16:45.144 07:26:49 reap_unregistered_poller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:45.144 07:26:49 reap_unregistered_poller -- scripts/common.sh@355 -- # echo 1 00:16:45.144 07:26:49 reap_unregistered_poller -- scripts/common.sh@365 -- # ver1[v]=1 00:16:45.144 07:26:49 reap_unregistered_poller -- scripts/common.sh@366 -- # decimal 2 00:16:45.144 07:26:49 reap_unregistered_poller -- scripts/common.sh@353 -- # local d=2 00:16:45.144 07:26:49 reap_unregistered_poller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:45.144 07:26:49 reap_unregistered_poller -- scripts/common.sh@355 -- # echo 2 00:16:45.144 07:26:49 reap_unregistered_poller -- scripts/common.sh@366 -- # ver2[v]=2 00:16:45.144 07:26:49 reap_unregistered_poller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:45.144 07:26:49 reap_unregistered_poller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:45.144 07:26:49 reap_unregistered_poller -- scripts/common.sh@368 -- # return 0 00:16:45.144 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:45.144 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:45.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.144 --rc genhtml_branch_coverage=1 00:16:45.144 --rc genhtml_function_coverage=1 00:16:45.144 --rc genhtml_legend=1 00:16:45.144 --rc geninfo_all_blocks=1 00:16:45.144 --rc geninfo_unexecuted_blocks=1 00:16:45.144 00:16:45.144 ' 00:16:45.144 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:45.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.144 --rc genhtml_branch_coverage=1 00:16:45.144 --rc genhtml_function_coverage=1 00:16:45.144 --rc genhtml_legend=1 00:16:45.144 --rc geninfo_all_blocks=1 00:16:45.144 --rc geninfo_unexecuted_blocks=1 00:16:45.144 00:16:45.144 ' 00:16:45.144 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:45.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.144 --rc genhtml_branch_coverage=1 00:16:45.144 --rc genhtml_function_coverage=1 00:16:45.144 --rc genhtml_legend=1 00:16:45.144 --rc geninfo_all_blocks=1 00:16:45.144 --rc geninfo_unexecuted_blocks=1 00:16:45.144 00:16:45.144 ' 00:16:45.144 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:45.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.144 --rc genhtml_branch_coverage=1 00:16:45.144 --rc genhtml_function_coverage=1 00:16:45.144 --rc genhtml_legend=1 00:16:45.144 --rc geninfo_all_blocks=1 00:16:45.144 --rc geninfo_unexecuted_blocks=1 00:16:45.144 00:16:45.144 ' 00:16:45.144 07:26:49 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:16:45.144 07:26:49 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:16:45.144 07:26:49 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:16:45.144 07:26:49 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # 
testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:16:45.144 07:26:49 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:16:45.144 07:26:49 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:45.144 07:26:49 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:16:45.144 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:45.144 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:16:45.144 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:45.144 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:45.144 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:45.144 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:16:45.144 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:16:45.144 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:16:45.144 
07:26:49 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_CET=n 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:16:45.144 07:26:49 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=y 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:16:45.145 
07:26:49 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_FC=n 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:16:45.145 07:26:49 reap_unregistered_poller -- common/build_config.sh@90 -- # CONFIG_URING=n 00:16:45.145 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 
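The reap_unregistered_poller prologue traced before the build_config dump above runs the scripts/common.sh version helpers against lcov (cmp_versions 1.15 '<' 2 via lt) to choose coverage options. The helper splits each version string on '.', '-' and ':' and compares the pieces numerically, element by element; a rough sketch under that reading of the trace (the real helper handles more operators and edge cases than shown here):

  # Rough sketch of scripts/common.sh cmp_versions / lt, as traced before the
  # build_config dump above.
  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v d1 d2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"

      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          d1=${ver1[v]:-0} d2=${ver2[v]:-0}
          (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
          (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '==' || $op == '>=' || $op == '<=' ]]   # all components equal
  }

  lt() { cmp_versions "$1" '<' "$2"; }
  # As in the trace: old lcov gets the legacy branch/function coverage switches.
  lt "$(lcov --version | awk '{print $NF}')" 2 &&
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'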
00:16:45.145 07:26:49 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:45.145 07:26:49 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:16:45.408 07:26:49 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:16:45.408 07:26:49 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:16:45.408 07:26:49 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:16:45.408 07:26:49 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:16:45.408 07:26:49 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:16:45.408 07:26:49 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:45.408 07:26:49 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:45.408 07:26:49 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:45.408 07:26:49 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:45.408 07:26:49 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:45.408 07:26:49 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:45.408 07:26:49 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:16:45.408 07:26:49 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:45.408 #define SPDK_CONFIG_H 00:16:45.408 #define SPDK_CONFIG_AIO_FSDEV 1 00:16:45.408 #define SPDK_CONFIG_APPS 1 00:16:45.408 #define SPDK_CONFIG_ARCH native 00:16:45.408 #define SPDK_CONFIG_ASAN 1 00:16:45.408 #undef SPDK_CONFIG_AVAHI 00:16:45.408 #undef SPDK_CONFIG_CET 00:16:45.408 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:16:45.408 #define SPDK_CONFIG_COVERAGE 1 00:16:45.408 #define SPDK_CONFIG_CROSS_PREFIX 00:16:45.408 #undef SPDK_CONFIG_CRYPTO 00:16:45.408 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:45.408 #undef SPDK_CONFIG_CUSTOMOCF 00:16:45.408 #undef SPDK_CONFIG_DAOS 00:16:45.408 #define SPDK_CONFIG_DAOS_DIR 00:16:45.408 #define SPDK_CONFIG_DEBUG 1 00:16:45.408 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:45.408 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:16:45.408 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:45.408 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:45.408 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:45.408 #undef SPDK_CONFIG_DPDK_UADK 00:16:45.408 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:45.408 #define SPDK_CONFIG_EXAMPLES 1 00:16:45.408 #undef SPDK_CONFIG_FC 00:16:45.408 #define SPDK_CONFIG_FC_PATH 00:16:45.408 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:45.408 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:45.408 #define SPDK_CONFIG_FSDEV 1 00:16:45.408 #undef SPDK_CONFIG_FUSE 00:16:45.408 #undef SPDK_CONFIG_FUZZER 00:16:45.408 #define SPDK_CONFIG_FUZZER_LIB 00:16:45.408 #undef SPDK_CONFIG_GOLANG 00:16:45.408 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:45.408 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:45.408 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:45.408 #define SPDK_CONFIG_HAVE_KEYUTILS 1 
00:16:45.408 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:45.408 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:45.408 #undef SPDK_CONFIG_HAVE_LZ4 00:16:45.408 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:16:45.408 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:16:45.408 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:45.408 #define SPDK_CONFIG_IDXD 1 00:16:45.408 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:45.409 #undef SPDK_CONFIG_IPSEC_MB 00:16:45.409 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:45.409 #define SPDK_CONFIG_ISAL 1 00:16:45.409 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:45.409 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:45.409 #define SPDK_CONFIG_LIBDIR 00:16:45.409 #undef SPDK_CONFIG_LTO 00:16:45.409 #define SPDK_CONFIG_MAX_LCORES 128 00:16:45.409 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:16:45.409 #define SPDK_CONFIG_NVME_CUSE 1 00:16:45.409 #undef SPDK_CONFIG_OCF 00:16:45.409 #define SPDK_CONFIG_OCF_PATH 00:16:45.409 #define SPDK_CONFIG_OPENSSL_PATH 00:16:45.409 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:45.409 #define SPDK_CONFIG_PGO_DIR 00:16:45.409 #undef SPDK_CONFIG_PGO_USE 00:16:45.409 #define SPDK_CONFIG_PREFIX /usr/local 00:16:45.409 #undef SPDK_CONFIG_RAID5F 00:16:45.409 #undef SPDK_CONFIG_RBD 00:16:45.409 #define SPDK_CONFIG_RDMA 1 00:16:45.409 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:45.409 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:45.409 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:45.409 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:45.409 #undef SPDK_CONFIG_SHARED 00:16:45.409 #undef SPDK_CONFIG_SMA 00:16:45.409 #define SPDK_CONFIG_TESTS 1 00:16:45.409 #undef SPDK_CONFIG_TSAN 00:16:45.409 #define SPDK_CONFIG_UBLK 1 00:16:45.409 #define SPDK_CONFIG_UBSAN 1 00:16:45.409 #define SPDK_CONFIG_UNIT_TESTS 1 00:16:45.409 #undef SPDK_CONFIG_URING 00:16:45.409 #define SPDK_CONFIG_URING_PATH 00:16:45.409 #undef SPDK_CONFIG_URING_ZNS 00:16:45.409 #undef SPDK_CONFIG_USDT 00:16:45.409 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:45.409 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:45.409 #undef SPDK_CONFIG_VFIO_USER 00:16:45.409 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:45.409 #define SPDK_CONFIG_VHOST 1 00:16:45.409 #define SPDK_CONFIG_VIRTIO 1 00:16:45.409 #undef SPDK_CONFIG_VTUNE 00:16:45.409 #define SPDK_CONFIG_VTUNE_DIR 00:16:45.409 #define SPDK_CONFIG_WERROR 1 00:16:45.409 #define SPDK_CONFIG_WPDK_DIR 00:16:45.409 #undef SPDK_CONFIG_XNVME 00:16:45.409 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:45.409 07:26:49 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:45.409 07:26:49 reap_unregistered_poller -- scripts/common.sh@15 -- # shopt -s extglob 00:16:45.409 07:26:49 reap_unregistered_poller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.409 07:26:49 reap_unregistered_poller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.409 07:26:49 reap_unregistered_poller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.409 07:26:49 reap_unregistered_poller -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:45.409 07:26:49 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:45.409 07:26:49 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:45.409 07:26:49 reap_unregistered_poller -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:45.409 07:26:49 reap_unregistered_poller -- paths/export.sh@6 -- # export PATH 00:16:45.409 07:26:49 reap_unregistered_poller -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@7 -- # 
_pmrootdir=/home/vagrant/spdk_repo/spdk 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:16:45.409 07:26:49 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 1 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 1 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:16:45.409 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 1 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:16:45.410 
07:26:49 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : true 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 0 
00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@173 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@175 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@177 -- # : 0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@206 -- # cat 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export QEMU_BIN= 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@259 -- # QEMU_BIN= 00:16:45.410 07:26:49 reap_unregistered_poller -- 
common/autotest_common.sh@260 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@269 -- # _LCOV= 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@275 -- # lcov_opt= 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@279 -- # export valgrind= 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@279 -- # valgrind= 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@285 -- # uname -s 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:16:45.410 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@289 -- # MAKE=make 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@309 -- # TEST_MODE= 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@331 -- # [[ -z 73824 ]] 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@331 -- # kill -0 73824 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@343 
-- # local requested_size=2147483648 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@344 -- # local mount target_dir 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.WdQJlW 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.WdQJlW/tests/interrupt /tmp/spdk.WdQJlW 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@340 -- # df -T 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=1249308672 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=1254023168 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=4714496 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda1 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=9693003776 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=19681529856 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=9971748864 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=6265348096 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=6270111744 00:16:45.411 07:26:49 
reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=5242880 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=5242880 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda16 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=777306112 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=923156480 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=81207296 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda15 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=103000064 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=109395968 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=6395904 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=1254006784 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=1254019072 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=94936072192 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=4766707712 00:16:45.411 07:26:49 
reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:16:45.411 * Looking for test storage... 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@381 -- # local target_space new_size 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@385 -- # mount=/ 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@387 -- # target_space=9693003776 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@393 -- # [[ ext4 == tmpfs ]] 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@393 -- # [[ ext4 == ramfs ]] 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@394 -- # new_size=12186341376 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:16:45.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@402 -- # return 0 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set -o errtrace 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1685 -- # true 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # xtrace_fd 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:16:45.411 07:26:49 reap_unregistered_poller -- 
common/autotest_common.sh@31 -- # xtrace_restore 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1693 -- # lcov --version 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:45.411 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@336 -- # IFS=.-: 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@336 -- # read -ra ver1 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@337 -- # IFS=.-: 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@337 -- # read -ra ver2 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@338 -- # local 'op=<' 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@340 -- # ver1_l=2 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@341 -- # ver2_l=1 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@344 -- # case "$op" in 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@345 -- # : 1 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@365 -- # decimal 1 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@353 -- # local d=1 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@355 -- # echo 1 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@365 -- # ver1[v]=1 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@366 -- # decimal 2 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@353 -- # local d=2 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@355 -- # echo 2 00:16:45.411 07:26:49 reap_unregistered_poller -- scripts/common.sh@366 -- # ver2[v]=2 00:16:45.412 07:26:49 reap_unregistered_poller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:45.412 07:26:49 reap_unregistered_poller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:45.412 07:26:49 reap_unregistered_poller -- scripts/common.sh@368 -- # return 0 00:16:45.412 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:45.412 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:45.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.412 --rc genhtml_branch_coverage=1 00:16:45.412 --rc genhtml_function_coverage=1 00:16:45.412 --rc genhtml_legend=1 00:16:45.412 --rc geninfo_all_blocks=1 00:16:45.412 --rc geninfo_unexecuted_blocks=1 00:16:45.412 00:16:45.412 ' 00:16:45.412 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:45.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.412 --rc genhtml_branch_coverage=1 00:16:45.412 --rc genhtml_function_coverage=1 00:16:45.412 --rc genhtml_legend=1 00:16:45.412 --rc geninfo_all_blocks=1 00:16:45.412 --rc geninfo_unexecuted_blocks=1 00:16:45.412 00:16:45.412 ' 00:16:45.412 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:45.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.412 --rc genhtml_branch_coverage=1 00:16:45.412 --rc genhtml_function_coverage=1 00:16:45.412 --rc genhtml_legend=1 00:16:45.412 --rc geninfo_all_blocks=1 00:16:45.412 --rc geninfo_unexecuted_blocks=1 00:16:45.412 00:16:45.412 ' 00:16:45.412 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:45.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.412 --rc genhtml_branch_coverage=1 00:16:45.412 --rc genhtml_function_coverage=1 00:16:45.412 --rc genhtml_legend=1 00:16:45.412 --rc geninfo_all_blocks=1 00:16:45.412 --rc geninfo_unexecuted_blocks=1 00:16:45.412 00:16:45.412 ' 00:16:45.412 07:26:49 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:16:45.412 07:26:49 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.412 07:26:49 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:16:45.674 07:26:49 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:16:45.674 07:26:49 reap_unregistered_poller -- 
interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:16:45.674 07:26:49 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:16:45.674 07:26:49 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:16:45.674 07:26:49 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:16:45.674 07:26:49 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:16:45.674 07:26:49 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:16:45.674 07:26:49 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.674 07:26:49 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:16:45.674 07:26:49 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=73881 00:16:45.674 07:26:49 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:16:45.674 07:26:49 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:45.674 07:26:49 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 73881 /var/tmp/spdk.sock 00:16:45.674 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@835 -- # '[' -z 73881 ']' 00:16:45.674 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.674 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.674 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.674 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.674 07:26:49 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:16:45.674 [2024-11-20 07:26:49.388926] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
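The trace above launches the interrupt_tgt example application on a three-core mask (0x07) and blocks until it answers RPCs on /var/tmp/spdk.sock. A minimal standalone sketch of that launch sequence, using the same paths and flags that appear in the log (the bounded retry loop here is only an illustrative stand-in for the harness's waitforlisten helper), would be:

  #!/usr/bin/env bash
  # Sketch of the start_intr_tgt sequence visible in the trace above.
  set -euo pipefail

  SPDK_DIR=/home/vagrant/spdk_repo/spdk   # repo path taken from the log
  RPC_SOCK=/var/tmp/spdk.sock
  CPU_MASK=0x07                           # cores 0-2, matching the log

  # Launch the interrupt-mode target in the background; flags copied verbatim
  # from the trace (-m core mask, -r RPC socket path, plus -E -g as logged).
  "$SPDK_DIR/build/examples/interrupt_tgt" -m "$CPU_MASK" -r "$RPC_SOCK" -E -g &
  tgt_pid=$!

  # Poll the RPC socket until the target is ready (the real harness uses
  # waitforlisten and registers a killprocess trap for cleanup).
  for _ in $(seq 1 100); do
      if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done
  echo "interrupt_tgt (pid $tgt_pid) is listening on $RPC_SOCK"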
00:16:45.674 [2024-11-20 07:26:49.389075] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73881 ] 00:16:45.674 [2024-11-20 07:26:49.563623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:45.932 [2024-11-20 07:26:49.685076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.933 [2024-11-20 07:26:49.685174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.933 [2024-11-20 07:26:49.685224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.192 [2024-11-20 07:26:49.992596] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:46.451 07:26:50 reap_unregistered_poller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.451 07:26:50 reap_unregistered_poller -- common/autotest_common.sh@868 -- # return 0 00:16:46.451 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:16:46.451 07:26:50 reap_unregistered_poller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.451 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:16:46.451 07:26:50 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:16:46.451 07:26:50 reap_unregistered_poller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.451 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:16:46.451 "name": "app_thread", 00:16:46.451 "id": 1, 00:16:46.451 "active_pollers": [], 00:16:46.451 "timed_pollers": [ 00:16:46.451 { 00:16:46.451 "name": "rpc_subsystem_poll_servers", 00:16:46.451 "id": 1, 00:16:46.451 "state": "waiting", 00:16:46.451 "run_count": 0, 00:16:46.451 "busy_count": 0, 00:16:46.451 "period_ticks": 9160000 00:16:46.451 } 00:16:46.451 ], 00:16:46.451 "paused_pollers": [] 00:16:46.451 }' 00:16:46.451 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:16:46.451 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:16:46.451 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:16:46.451 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:16:46.452 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:16:46.452 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:16:46.452 07:26:50 reap_unregistered_poller -- interrupt/common.sh@77 -- # uname -s 00:16:46.452 07:26:50 reap_unregistered_poller -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:16:46.452 07:26:50 reap_unregistered_poller -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:16:46.452 5000+0 records in 00:16:46.452 5000+0 records out 00:16:46.452 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0253045 s, 405 MB/s 00:16:46.452 07:26:50 reap_unregistered_poller -- interrupt/common.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create 
/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:16:46.712 AIO0 00:16:46.712 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:46.972 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:16:47.231 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:16:47.231 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:16:47.231 07:26:50 reap_unregistered_poller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.231 07:26:50 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:16:47.231 07:26:50 reap_unregistered_poller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.231 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:16:47.231 "name": "app_thread", 00:16:47.231 "id": 1, 00:16:47.231 "active_pollers": [], 00:16:47.231 "timed_pollers": [ 00:16:47.232 { 00:16:47.232 "name": "rpc_subsystem_poll_servers", 00:16:47.232 "id": 1, 00:16:47.232 "state": "waiting", 00:16:47.232 "run_count": 0, 00:16:47.232 "busy_count": 0, 00:16:47.232 "period_ticks": 9160000 00:16:47.232 } 00:16:47.232 ], 00:16:47.232 "paused_pollers": [] 00:16:47.232 }' 00:16:47.232 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:16:47.232 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:16:47.232 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:16:47.232 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:16:47.232 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:16:47.232 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:16:47.232 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:47.232 07:26:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 73881 00:16:47.232 07:26:50 reap_unregistered_poller -- common/autotest_common.sh@954 -- # '[' -z 73881 ']' 00:16:47.232 07:26:50 reap_unregistered_poller -- common/autotest_common.sh@958 -- # kill -0 73881 00:16:47.232 07:26:50 reap_unregistered_poller -- common/autotest_common.sh@959 -- # uname 00:16:47.232 07:26:50 reap_unregistered_poller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.232 07:26:50 reap_unregistered_poller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73881 00:16:47.232 07:26:50 reap_unregistered_poller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.232 07:26:50 reap_unregistered_poller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.232 killing process with pid 73881 00:16:47.232 07:26:50 reap_unregistered_poller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73881' 00:16:47.232 07:26:50 reap_unregistered_poller -- common/autotest_common.sh@973 -- # kill 73881 00:16:47.232 07:26:50 reap_unregistered_poller -- common/autotest_common.sh@978 -- # wait 73881 
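The verification step traced above asks the running target for its poller state over RPC and pulls the poller names out of the JSON with jq, expecting rpc_subsystem_poll_servers to remain registered as a timed poller. A hedged standalone version of that query, reusing the RPC method and jq filters shown in the log and assuming the target started earlier is still listening on the same socket, could look like:

  #!/usr/bin/env bash
  # Sketch of the poller inspection performed by reap_unregistered_poller.sh.
  set -euo pipefail

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  RPC_SOCK=/var/tmp/spdk.sock

  # Fetch the first thread's poller state (app_thread in the log output).
  app_thread=$("$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" thread_get_pollers | jq -r '.threads[0]')

  # Extract poller names exactly as the test script does.
  active=$(jq -r '.active_pollers[].name' <<<"$app_thread")
  timed=$(jq -r '.timed_pollers[].name' <<<"$app_thread")

  echo "active pollers: ${active:-<none>}"
  echo "timed pollers:  ${timed:-<none>}"   # expected: rpc_subsystem_poll_servers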
00:16:48.611 07:26:52 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:16:48.611 07:26:52 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:16:48.611 ************************************ 00:16:48.611 END TEST reap_unregistered_poller 00:16:48.611 ************************************ 00:16:48.611 00:16:48.611 real 0m3.384s 00:16:48.611 user 0m2.630s 00:16:48.611 sys 0m0.736s 00:16:48.611 07:26:52 reap_unregistered_poller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.612 07:26:52 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:16:48.612 07:26:52 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:16:48.612 07:26:52 -- spdk/autotest.sh@194 -- # uname -s 00:16:48.612 07:26:52 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:16:48.612 07:26:52 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:16:48.612 07:26:52 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:16:48.612 07:26:52 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:16:48.612 07:26:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:48.612 07:26:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.612 07:26:52 -- common/autotest_common.sh@10 -- # set +x 00:16:48.612 ************************************ 00:16:48.612 START TEST spdk_dd 00:16:48.612 ************************************ 00:16:48.612 07:26:52 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:16:48.612 * Looking for test storage... 00:16:48.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:16:48.612 07:26:52 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:48.612 07:26:52 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:48.612 07:26:52 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:16:48.612 07:26:52 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@345 -- # : 1 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@368 -- # return 0 00:16:48.612 07:26:52 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.612 07:26:52 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:48.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.612 --rc genhtml_branch_coverage=1 00:16:48.612 --rc genhtml_function_coverage=1 00:16:48.612 --rc genhtml_legend=1 00:16:48.612 --rc geninfo_all_blocks=1 00:16:48.612 --rc geninfo_unexecuted_blocks=1 00:16:48.612 00:16:48.612 ' 00:16:48.612 07:26:52 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:48.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.612 --rc genhtml_branch_coverage=1 00:16:48.612 --rc genhtml_function_coverage=1 00:16:48.612 --rc genhtml_legend=1 00:16:48.612 --rc geninfo_all_blocks=1 00:16:48.612 --rc geninfo_unexecuted_blocks=1 00:16:48.612 00:16:48.612 ' 00:16:48.612 07:26:52 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:48.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.612 --rc genhtml_branch_coverage=1 00:16:48.612 --rc genhtml_function_coverage=1 00:16:48.612 --rc genhtml_legend=1 00:16:48.612 --rc geninfo_all_blocks=1 00:16:48.612 --rc geninfo_unexecuted_blocks=1 00:16:48.612 00:16:48.612 ' 00:16:48.612 07:26:52 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:48.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.612 --rc genhtml_branch_coverage=1 00:16:48.612 --rc genhtml_function_coverage=1 00:16:48.612 --rc genhtml_legend=1 00:16:48.612 --rc geninfo_all_blocks=1 00:16:48.612 --rc geninfo_unexecuted_blocks=1 00:16:48.612 00:16:48.612 ' 00:16:48.612 07:26:52 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.612 07:26:52 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.612 07:26:52 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:48.612 07:26:52 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:48.612 07:26:52 spdk_dd -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:48.612 07:26:52 spdk_dd -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:48.612 07:26:52 spdk_dd -- paths/export.sh@6 -- # export PATH 00:16:48.612 07:26:52 spdk_dd -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:48.612 07:26:52 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:49.182 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:16:49.182 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:50.146 07:26:53 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:16:50.146 07:26:53 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:16:50.146 07:26:53 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:16:50.146 07:26:53 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:16:50.146 07:26:53 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:16:50.146 07:26:53 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:16:50.146 07:26:53 spdk_dd -- 
scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:16:50.146 07:26:53 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:16:50.146 07:26:53 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:16:50.146 07:26:53 spdk_dd -- scripts/common.sh@233 -- # local class 00:16:50.146 07:26:53 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:16:50.146 07:26:53 spdk_dd -- scripts/common.sh@235 -- # local progif 00:16:50.146 07:26:53 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:16:50.146 07:26:53 spdk_dd -- scripts/common.sh@236 -- # class=01 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@238 -- # progif=02 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@18 -- # local i 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@27 -- # return 0 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@328 -- # (( 1 )) 00:16:50.147 07:26:53 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 00:16:50.147 07:26:53 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:16:50.147 07:26:53 spdk_dd -- dd/common.sh@139 -- # local lib 00:16:50.147 07:26:53 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:16:50.147 07:26:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:16:50.147 07:26:53 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:16:50.147 07:26:53 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:50.147 07:26:53 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:16:50.147 07:26:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:16:50.147 07:26:53 spdk_dd -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:16:50.147 07:26:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:16:50.147 07:26:53 spdk_dd -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:16:50.147 07:26:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:16:50.147 07:26:53 spdk_dd -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:16:50.147 07:26:53 
spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:16:50.147 07:26:53 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:16:50.147 07:26:53 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:16:50.147 * spdk_dd linked to liburing 00:16:50.147 07:26:53 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:16:50.147 07:26:53 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 
00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:50.147 
07:26:53 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:16:50.147 07:26:53 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=n 00:16:50.147 07:26:53 spdk_dd -- dd/common.sh@149 -- # [[ n != y ]] 00:16:50.148 07:26:53 spdk_dd -- dd/common.sh@150 -- # printf '* spdk_dd built with liburing, but no liburing support requested?\n' 00:16:50.148 * spdk_dd built with liburing, but no liburing support requested? 00:16:50.148 07:26:53 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:16:50.148 07:26:53 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:16:50.148 07:26:53 spdk_dd -- dd/common.sh@153 -- # return 0 00:16:50.148 07:26:53 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:16:50.148 07:26:53 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:16:50.148 07:26:53 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:50.148 07:26:53 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.148 07:26:53 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:16:50.148 ************************************ 00:16:50.148 START TEST spdk_dd_basic_rw 00:16:50.148 ************************************ 00:16:50.148 07:26:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:16:50.148 * Looking for test storage... 
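The check_liburing step above decides how the rest of dd.sh behaves: it scans the spdk_dd binary's NEEDED entries with objdump, notices liburing.so.2, then sources build_config.sh and sees CONFIG_URING=n, which is why it prints the "built with liburing, but no liburing support requested?" note and still sets liburing_in_use=1. A hedged sketch of that logic (SPDK_DD and SPDK_REPO are illustrative placeholders, not variables taken from the log):

liburing_in_use=0
while read -r _ lib _; do
    if [[ $lib == liburing.so.* ]]; then
        liburing_in_use=1
        printf '* spdk_dd linked to liburing\n'
    fi
done < <(objdump -p "$SPDK_DD" | grep NEEDED)

source "$SPDK_REPO/test/common/build_config.sh"
if (( liburing_in_use )) && [[ $CONFIG_URING != y ]]; then
    printf '* spdk_dd built with liburing, but no liburing support requested?\n'
fi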
00:16:50.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:16:50.148 07:26:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:50.148 07:26:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:16:50.148 07:26:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:50.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.148 --rc genhtml_branch_coverage=1 00:16:50.148 --rc genhtml_function_coverage=1 00:16:50.148 --rc genhtml_legend=1 00:16:50.148 --rc geninfo_all_blocks=1 00:16:50.148 --rc geninfo_unexecuted_blocks=1 00:16:50.148 00:16:50.148 ' 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:50.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.148 --rc genhtml_branch_coverage=1 00:16:50.148 --rc genhtml_function_coverage=1 00:16:50.148 --rc genhtml_legend=1 00:16:50.148 --rc geninfo_all_blocks=1 00:16:50.148 --rc geninfo_unexecuted_blocks=1 00:16:50.148 00:16:50.148 ' 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:50.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.148 --rc genhtml_branch_coverage=1 00:16:50.148 --rc genhtml_function_coverage=1 00:16:50.148 --rc genhtml_legend=1 00:16:50.148 --rc geninfo_all_blocks=1 00:16:50.148 --rc geninfo_unexecuted_blocks=1 00:16:50.148 00:16:50.148 ' 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:50.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.148 --rc genhtml_branch_coverage=1 00:16:50.148 --rc genhtml_function_coverage=1 00:16:50.148 --rc genhtml_legend=1 00:16:50.148 --rc geninfo_all_blocks=1 00:16:50.148 --rc geninfo_unexecuted_blocks=1 00:16:50.148 00:16:50.148 ' 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # export PATH 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:16:50.148 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:16:50.149 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:16:50.435 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not 
Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): 
Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 25 Data Units Written: 3 Host Read Commands: 626 Host Write Commands: 19 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage 
Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:16:50.435 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported 
Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational 
Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 25 Data Units Written: 3 Host Read Commands: 626 Host Write Commands: 19 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:16:50.436 ************************************ 00:16:50.436 START TEST dd_bs_lt_native_bs 00:16:50.436 ************************************ 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:50.436 { 00:16:50.436 "subsystems": [ 00:16:50.436 { 00:16:50.436 "subsystem": "bdev", 00:16:50.436 "config": [ 00:16:50.436 { 00:16:50.436 "params": { 00:16:50.436 "trtype": "pcie", 00:16:50.436 "traddr": "0000:00:10.0", 00:16:50.436 "name": "Nvme0" 00:16:50.436 }, 00:16:50.436 "method": "bdev_nvme_attach_controller" 00:16:50.436 }, 00:16:50.436 { 00:16:50.436 "method": "bdev_wait_for_examine" 00:16:50.436 } 00:16:50.436 ] 00:16:50.436 } 00:16:50.436 ] 00:16:50.436 } 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:50.436 07:26:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:16:50.696 [2024-11-20 07:26:54.419598] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
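The two very large [[ ... =~ ... ]] tests earlier are get_native_nvme_bs at work: the first regex pulls the namespace's current LBA format index out of the spdk_nvme_identify dump (#04 here), the second pulls that format's data size (4096). dd_bs_lt_native_bs then runs spdk_dd with --bs=2048, smaller than that native block size, with input piped over /dev/fd/62 and the bdev config over /dev/fd/61, and wraps the call in NOT so the test only passes if spdk_dd fails. A rough sketch of the extraction, assuming id holds the identify output as one string:

id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0')
re='Current LBA Format: *LBA Format #([0-9]+)'
[[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}
re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ $id =~ $re ]] && native_bs=${BASH_REMATCH[1]}
echo "$native_bs"   # 4096 for this QEMU namespace, since its current format is LBA Format #04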
00:16:50.696 [2024-11-20 07:26:54.419809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74181 ] 00:16:50.696 [2024-11-20 07:26:54.594554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.955 [2024-11-20 07:26:54.719778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.215 [2024-11-20 07:26:55.118977] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:16:51.215 [2024-11-20 07:26:55.119167] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:52.152 [2024-11-20 07:26:55.780088] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:16:52.152 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:16:52.152 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:52.152 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:16:52.152 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:16:52.152 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:16:52.152 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:52.152 00:16:52.152 real 0m1.715s 00:16:52.152 user 0m1.376s 00:16:52.152 sys 0m0.265s 00:16:52.152 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.152 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:16:52.152 ************************************ 00:16:52.152 END TEST dd_bs_lt_native_bs 00:16:52.152 ************************************ 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:16:52.411 ************************************ 00:16:52.411 START TEST dd_rw 00:16:52.411 ************************************ 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in 
{0..2} 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:16:52.411 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:16:52.670 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:16:52.670 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:16:52.670 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:16:52.670 07:26:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:16:52.670 { 00:16:52.670 "subsystems": [ 00:16:52.670 { 00:16:52.670 "subsystem": "bdev", 00:16:52.670 "config": [ 00:16:52.670 { 00:16:52.670 "params": { 00:16:52.670 "trtype": "pcie", 00:16:52.670 "traddr": "0000:00:10.0", 00:16:52.670 "name": "Nvme0" 00:16:52.670 }, 00:16:52.670 "method": "bdev_nvme_attach_controller" 00:16:52.670 }, 00:16:52.670 { 00:16:52.670 "method": "bdev_wait_for_examine" 00:16:52.670 } 00:16:52.670 ] 00:16:52.670 } 00:16:52.670 ] 00:16:52.670 } 00:16:52.930 [2024-11-20 07:26:56.613772] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
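From here on the log is the dd_rw body repeating one pattern. basic_rw 4096 derives its block sizes by left-shifting the native block size (4096, 8192, 16384), pairs each with queue depths 1 and 64, and for every combination writes a generated dump file to Nvme0n1, reads it back into a second file, and diffs the two. The spdk_dd pairs below reduce to roughly this loop (gen_bytes and gen_conf are the helpers from the trace; SPDK_DD stands in for build/bin/spdk_dd; the count rule is an assumption that reproduces the 15x4096 and 7x8192 values seen here):

native_bs=4096
qds=(1 64)
bss=()
for i in {0..2}; do bss+=($((native_bs << i))); done   # 4096 8192 16384

for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        count=$((61440 / bs))                  # 15, 7, 3 ... (assumed; matches the trace for 4096 and 8192)
        gen_bytes $((count * bs)) > dd.dump0   # assumed to emit that many bytes on stdout
        "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
        "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
        diff -q dd.dump0 dd.dump1
    done
done

Between runs clear_nvme scrubs the bdev (see the note further down), so a read-back can never pass on data left over from the previous combination.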
00:16:52.930 [2024-11-20 07:26:56.613899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74222 ] 00:16:52.930 [2024-11-20 07:26:56.800481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.189 [2024-11-20 07:26:56.923861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.449  [2024-11-20T07:26:58.760Z] Copying: 60/60 [kB] (average 29 MBps) 00:16:54.827 00:16:54.827 07:26:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:16:54.827 07:26:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:16:54.827 07:26:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:16:54.827 07:26:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:16:54.827 { 00:16:54.827 "subsystems": [ 00:16:54.827 { 00:16:54.827 "subsystem": "bdev", 00:16:54.827 "config": [ 00:16:54.827 { 00:16:54.827 "params": { 00:16:54.827 "trtype": "pcie", 00:16:54.827 "traddr": "0000:00:10.0", 00:16:54.827 "name": "Nvme0" 00:16:54.827 }, 00:16:54.827 "method": "bdev_nvme_attach_controller" 00:16:54.827 }, 00:16:54.827 { 00:16:54.827 "method": "bdev_wait_for_examine" 00:16:54.827 } 00:16:54.827 ] 00:16:54.827 } 00:16:54.827 ] 00:16:54.827 } 00:16:54.827 [2024-11-20 07:26:58.604471] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:16:54.827 [2024-11-20 07:26:58.604602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74252 ] 00:16:55.085 [2024-11-20 07:26:58.778968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.085 [2024-11-20 07:26:58.894648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.691  [2024-11-20T07:27:00.560Z] Copying: 60/60 [kB] (average 19 MBps) 00:16:56.627 00:16:56.627 07:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:16:56.627 07:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:16:56.627 07:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:16:56.627 07:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:16:56.628 07:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:16:56.628 07:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:16:56.628 07:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:16:56.628 07:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:16:56.628 07:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:16:56.628 07:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:16:56.628 07:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # 
set +x 00:16:56.628 { 00:16:56.628 "subsystems": [ 00:16:56.628 { 00:16:56.628 "subsystem": "bdev", 00:16:56.628 "config": [ 00:16:56.628 { 00:16:56.628 "params": { 00:16:56.628 "trtype": "pcie", 00:16:56.628 "traddr": "0000:00:10.0", 00:16:56.628 "name": "Nvme0" 00:16:56.628 }, 00:16:56.628 "method": "bdev_nvme_attach_controller" 00:16:56.628 }, 00:16:56.628 { 00:16:56.628 "method": "bdev_wait_for_examine" 00:16:56.628 } 00:16:56.628 ] 00:16:56.628 } 00:16:56.628 ] 00:16:56.628 } 00:16:56.628 [2024-11-20 07:27:00.325911] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:16:56.628 [2024-11-20 07:27:00.326033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74283 ] 00:16:56.628 [2024-11-20 07:27:00.498055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.888 [2024-11-20 07:27:00.615446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.146  [2024-11-20T07:27:02.465Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:16:58.532 00:16:58.532 07:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:16:58.532 07:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:16:58.532 07:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:16:58.532 07:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:16:58.532 07:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:16:58.532 07:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:16:58.532 07:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:16:58.792 07:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:16:58.792 07:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:16:58.792 07:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:16:58.792 07:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:16:58.792 { 00:16:58.792 "subsystems": [ 00:16:58.792 { 00:16:58.792 "subsystem": "bdev", 00:16:58.792 "config": [ 00:16:58.792 { 00:16:58.792 "params": { 00:16:58.792 "trtype": "pcie", 00:16:58.792 "traddr": "0000:00:10.0", 00:16:58.792 "name": "Nvme0" 00:16:58.792 }, 00:16:58.792 "method": "bdev_nvme_attach_controller" 00:16:58.792 }, 00:16:58.792 { 00:16:58.792 "method": "bdev_wait_for_examine" 00:16:58.792 } 00:16:58.792 ] 00:16:58.792 } 00:16:58.792 ] 00:16:58.792 } 00:16:58.792 [2024-11-20 07:27:02.694075] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
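The /dev/zero copy above is clear_nvme: after each write/read/diff pass the suite overwrites the region it just used on Nvme0n1 with zeroes (a single 1 MiB block here) so stale data cannot satisfy the next read-back. Approximately, based on the locals visible in the trace (SPDK_DD again stands in for build/bin/spdk_dd, and the count calculation is an assumption; the log only ever shows count=1):

clear_nvme() {
    local bdev=$1 nvme_ref=$2 size=${3:-61440}
    local bs=1048576
    local count=$(( (size + bs - 1) / bs ))   # round up to whole 1 MiB copies
    "$SPDK_DD" --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" --json <(gen_conf)
}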
00:16:58.792 [2024-11-20 07:27:02.694207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74313 ] 00:16:59.051 [2024-11-20 07:27:02.867197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.310 [2024-11-20 07:27:02.985461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.569  [2024-11-20T07:27:04.442Z] Copying: 60/60 [kB] (average 58 MBps) 00:17:00.509 00:17:00.509 07:27:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:17:00.509 07:27:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:17:00.509 07:27:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:17:00.509 07:27:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:17:00.509 { 00:17:00.509 "subsystems": [ 00:17:00.509 { 00:17:00.509 "subsystem": "bdev", 00:17:00.509 "config": [ 00:17:00.509 { 00:17:00.509 "params": { 00:17:00.509 "trtype": "pcie", 00:17:00.509 "traddr": "0000:00:10.0", 00:17:00.509 "name": "Nvme0" 00:17:00.509 }, 00:17:00.509 "method": "bdev_nvme_attach_controller" 00:17:00.509 }, 00:17:00.509 { 00:17:00.509 "method": "bdev_wait_for_examine" 00:17:00.509 } 00:17:00.509 ] 00:17:00.509 } 00:17:00.509 ] 00:17:00.509 } 00:17:00.509 [2024-11-20 07:27:04.399088] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:17:00.509 [2024-11-20 07:27:04.399217] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74338 ] 00:17:00.769 [2024-11-20 07:27:04.578721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.029 [2024-11-20 07:27:04.700387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.289  [2024-11-20T07:27:06.604Z] Copying: 60/60 [kB] (average 58 MBps) 00:17:02.671 00:17:02.671 07:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:02.671 07:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:17:02.671 07:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:02.671 07:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:17:02.671 07:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:17:02.671 07:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:17:02.671 07:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:17:02.671 07:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:02.671 07:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:17:02.671 07:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:17:02.671 07:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # 
set +x 00:17:02.671 { 00:17:02.671 "subsystems": [ 00:17:02.671 { 00:17:02.671 "subsystem": "bdev", 00:17:02.671 "config": [ 00:17:02.671 { 00:17:02.671 "params": { 00:17:02.671 "trtype": "pcie", 00:17:02.671 "traddr": "0000:00:10.0", 00:17:02.671 "name": "Nvme0" 00:17:02.671 }, 00:17:02.671 "method": "bdev_nvme_attach_controller" 00:17:02.671 }, 00:17:02.671 { 00:17:02.671 "method": "bdev_wait_for_examine" 00:17:02.671 } 00:17:02.671 ] 00:17:02.671 } 00:17:02.671 ] 00:17:02.671 } 00:17:02.671 [2024-11-20 07:27:06.362591] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:17:02.671 [2024-11-20 07:27:06.362865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74369 ] 00:17:02.671 [2024-11-20 07:27:06.535643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.931 [2024-11-20 07:27:06.653087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.191  [2024-11-20T07:27:08.063Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:17:04.130 00:17:04.130 07:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:17:04.130 07:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:17:04.130 07:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:17:04.130 07:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:17:04.130 07:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:17:04.130 07:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:17:04.130 07:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:17:04.130 07:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:17:04.699 07:27:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:17:04.699 07:27:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:17:04.699 07:27:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:17:04.699 07:27:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:17:04.699 { 00:17:04.699 "subsystems": [ 00:17:04.699 { 00:17:04.699 "subsystem": "bdev", 00:17:04.699 "config": [ 00:17:04.699 { 00:17:04.699 "params": { 00:17:04.699 "trtype": "pcie", 00:17:04.699 "traddr": "0000:00:10.0", 00:17:04.699 "name": "Nvme0" 00:17:04.699 }, 00:17:04.699 "method": "bdev_nvme_attach_controller" 00:17:04.699 }, 00:17:04.699 { 00:17:04.699 "method": "bdev_wait_for_examine" 00:17:04.699 } 00:17:04.699 ] 00:17:04.699 } 00:17:04.699 ] 00:17:04.699 } 00:17:04.699 [2024-11-20 07:27:08.416140] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
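Every spdk_dd invocation in this log receives its bdev configuration through --json on an inherited file descriptor (/dev/fd/62 or /dev/fd/61); gen_conf is what produces the small JSON document repeated throughout, attaching the PCIe controller at 0000:00:10.0 as Nvme0 and waiting until bdev examination finishes before the copy starts. A simplified stand-in for that helper (the real gen_conf builds this from the method_bdev_nvme_attach_controller_0 array set up in basic_rw.sh):

gen_conf() {
cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}

Typical use, matching the commands above: "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf).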
00:17:04.699 [2024-11-20 07:27:08.416276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74399 ] 00:17:04.699 [2024-11-20 07:27:08.588241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.959 [2024-11-20 07:27:08.707488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.218  [2024-11-20T07:27:10.533Z] Copying: 56/56 [kB] (average 54 MBps) 00:17:06.600 00:17:06.600 07:27:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:17:06.600 07:27:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:17:06.600 07:27:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:17:06.600 07:27:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:17:06.600 { 00:17:06.600 "subsystems": [ 00:17:06.600 { 00:17:06.600 "subsystem": "bdev", 00:17:06.600 "config": [ 00:17:06.600 { 00:17:06.600 "params": { 00:17:06.600 "trtype": "pcie", 00:17:06.600 "traddr": "0000:00:10.0", 00:17:06.600 "name": "Nvme0" 00:17:06.600 }, 00:17:06.600 "method": "bdev_nvme_attach_controller" 00:17:06.600 }, 00:17:06.600 { 00:17:06.600 "method": "bdev_wait_for_examine" 00:17:06.600 } 00:17:06.600 ] 00:17:06.600 } 00:17:06.600 ] 00:17:06.600 } 00:17:06.600 [2024-11-20 07:27:10.293337] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:17:06.600 [2024-11-20 07:27:10.293517] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74423 ] 00:17:06.600 [2024-11-20 07:27:10.467370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.859 [2024-11-20 07:27:10.602627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.119  [2024-11-20T07:27:12.430Z] Copying: 56/56 [kB] (average 27 MBps) 00:17:08.497 00:17:08.497 07:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:08.497 07:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:17:08.497 07:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:08.497 07:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:17:08.497 07:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:17:08.497 07:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:17:08.497 07:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:17:08.497 07:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:17:08.497 07:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:08.497 07:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:17:08.497 07:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set 
+x 00:17:08.497 { 00:17:08.497 "subsystems": [ 00:17:08.497 { 00:17:08.497 "subsystem": "bdev", 00:17:08.497 "config": [ 00:17:08.497 { 00:17:08.497 "params": { 00:17:08.497 "trtype": "pcie", 00:17:08.497 "traddr": "0000:00:10.0", 00:17:08.497 "name": "Nvme0" 00:17:08.497 }, 00:17:08.497 "method": "bdev_nvme_attach_controller" 00:17:08.497 }, 00:17:08.497 { 00:17:08.497 "method": "bdev_wait_for_examine" 00:17:08.497 } 00:17:08.497 ] 00:17:08.497 } 00:17:08.497 ] 00:17:08.497 } 00:17:08.497 [2024-11-20 07:27:12.228029] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:17:08.497 [2024-11-20 07:27:12.228286] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74449 ] 00:17:08.497 [2024-11-20 07:27:12.412408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.756 [2024-11-20 07:27:12.555105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.324  [2024-11-20T07:27:14.636Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:17:10.703 00:17:10.703 07:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:17:10.703 07:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:17:10.703 07:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:17:10.703 07:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:17:10.703 07:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:17:10.703 07:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:17:10.703 07:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:17:10.962 07:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:17:10.962 07:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:17:10.962 07:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:17:10.962 07:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:17:10.962 { 00:17:10.962 "subsystems": [ 00:17:10.962 { 00:17:10.962 "subsystem": "bdev", 00:17:10.962 "config": [ 00:17:10.962 { 00:17:10.962 "params": { 00:17:10.962 "trtype": "pcie", 00:17:10.962 "traddr": "0000:00:10.0", 00:17:10.962 "name": "Nvme0" 00:17:10.962 }, 00:17:10.962 "method": "bdev_nvme_attach_controller" 00:17:10.962 }, 00:17:10.962 { 00:17:10.962 "method": "bdev_wait_for_examine" 00:17:10.962 } 00:17:10.962 ] 00:17:10.962 } 00:17:10.962 ] 00:17:10.962 } 00:17:11.220 [2024-11-20 07:27:14.916184] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:17:11.220 [2024-11-20 07:27:14.916434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74490 ] 00:17:11.220 [2024-11-20 07:27:15.095134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.478 [2024-11-20 07:27:15.227138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.044  [2024-11-20T07:27:16.913Z] Copying: 56/56 [kB] (average 54 MBps) 00:17:12.980 00:17:12.980 07:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:17:12.980 07:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:17:12.980 07:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:17:12.980 07:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:17:12.980 { 00:17:12.980 "subsystems": [ 00:17:12.980 { 00:17:12.980 "subsystem": "bdev", 00:17:12.980 "config": [ 00:17:12.980 { 00:17:12.980 "params": { 00:17:12.980 "trtype": "pcie", 00:17:12.980 "traddr": "0000:00:10.0", 00:17:12.980 "name": "Nvme0" 00:17:12.980 }, 00:17:12.980 "method": "bdev_nvme_attach_controller" 00:17:12.980 }, 00:17:12.980 { 00:17:12.980 "method": "bdev_wait_for_examine" 00:17:12.980 } 00:17:12.980 ] 00:17:12.980 } 00:17:12.980 ] 00:17:12.980 } 00:17:12.980 [2024-11-20 07:27:16.820772] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:17:12.980 [2024-11-20 07:27:16.821018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74519 ] 00:17:13.239 [2024-11-20 07:27:17.016702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.239 [2024-11-20 07:27:17.156368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.805  [2024-11-20T07:27:19.114Z] Copying: 56/56 [kB] (average 54 MBps) 00:17:15.181 00:17:15.181 07:27:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:15.181 07:27:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:17:15.181 07:27:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:15.181 07:27:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:17:15.181 07:27:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:17:15.181 07:27:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:17:15.181 07:27:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:17:15.181 07:27:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:15.181 07:27:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:17:15.181 07:27:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:17:15.181 07:27:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # 
set +x 00:17:15.181 { 00:17:15.181 "subsystems": [ 00:17:15.181 { 00:17:15.181 "subsystem": "bdev", 00:17:15.181 "config": [ 00:17:15.181 { 00:17:15.181 "params": { 00:17:15.181 "trtype": "pcie", 00:17:15.181 "traddr": "0000:00:10.0", 00:17:15.181 "name": "Nvme0" 00:17:15.181 }, 00:17:15.181 "method": "bdev_nvme_attach_controller" 00:17:15.181 }, 00:17:15.181 { 00:17:15.181 "method": "bdev_wait_for_examine" 00:17:15.181 } 00:17:15.181 ] 00:17:15.181 } 00:17:15.181 ] 00:17:15.181 } 00:17:15.181 [2024-11-20 07:27:19.096998] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:17:15.181 [2024-11-20 07:27:19.097206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74550 ] 00:17:15.445 [2024-11-20 07:27:19.276830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.707 [2024-11-20 07:27:19.412388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.965  [2024-11-20T07:27:21.275Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:17:17.342 00:17:17.342 07:27:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:17:17.342 07:27:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:17:17.342 07:27:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:17:17.342 07:27:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:17:17.342 07:27:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:17:17.342 07:27:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:17:17.342 07:27:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:17:17.342 07:27:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:17:17.603 07:27:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:17:17.603 07:27:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:17:17.603 07:27:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:17:17.603 07:27:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:17:17.603 { 00:17:17.603 "subsystems": [ 00:17:17.603 { 00:17:17.603 "subsystem": "bdev", 00:17:17.603 "config": [ 00:17:17.603 { 00:17:17.603 "params": { 00:17:17.603 "trtype": "pcie", 00:17:17.603 "traddr": "0000:00:10.0", 00:17:17.603 "name": "Nvme0" 00:17:17.603 }, 00:17:17.603 "method": "bdev_nvme_attach_controller" 00:17:17.603 }, 00:17:17.603 { 00:17:17.603 "method": "bdev_wait_for_examine" 00:17:17.603 } 00:17:17.603 ] 00:17:17.603 } 00:17:17.603 ] 00:17:17.603 } 00:17:17.603 [2024-11-20 07:27:21.330466] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:17:17.603 [2024-11-20 07:27:21.330597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74580 ] 00:17:17.603 [2024-11-20 07:27:21.489104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.862 [2024-11-20 07:27:21.609439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.122  [2024-11-20T07:27:23.434Z] Copying: 48/48 [kB] (average 46 MBps) 00:17:19.501 00:17:19.501 07:27:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:17:19.501 07:27:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:17:19.501 07:27:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:17:19.501 07:27:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:17:19.501 { 00:17:19.501 "subsystems": [ 00:17:19.501 { 00:17:19.501 "subsystem": "bdev", 00:17:19.501 "config": [ 00:17:19.501 { 00:17:19.501 "params": { 00:17:19.501 "trtype": "pcie", 00:17:19.501 "traddr": "0000:00:10.0", 00:17:19.501 "name": "Nvme0" 00:17:19.501 }, 00:17:19.501 "method": "bdev_nvme_attach_controller" 00:17:19.501 }, 00:17:19.501 { 00:17:19.501 "method": "bdev_wait_for_examine" 00:17:19.501 } 00:17:19.501 ] 00:17:19.501 } 00:17:19.501 ] 00:17:19.501 } 00:17:19.501 [2024-11-20 07:27:23.246167] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:17:19.501 [2024-11-20 07:27:23.246347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74606 ] 00:17:19.501 [2024-11-20 07:27:23.419275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.761 [2024-11-20 07:27:23.539916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.019  [2024-11-20T07:27:24.892Z] Copying: 48/48 [kB] (average 46 MBps) 00:17:20.959 00:17:20.959 07:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:20.959 07:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:17:20.959 07:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:20.959 07:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:17:20.959 07:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:17:20.959 07:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:17:20.959 07:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:17:20.959 07:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:20.959 07:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:17:20.959 07:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:17:20.959 07:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # 
set +x 00:17:21.224 { 00:17:21.224 "subsystems": [ 00:17:21.224 { 00:17:21.224 "subsystem": "bdev", 00:17:21.224 "config": [ 00:17:21.224 { 00:17:21.224 "params": { 00:17:21.224 "trtype": "pcie", 00:17:21.224 "traddr": "0000:00:10.0", 00:17:21.224 "name": "Nvme0" 00:17:21.224 }, 00:17:21.224 "method": "bdev_nvme_attach_controller" 00:17:21.224 }, 00:17:21.224 { 00:17:21.224 "method": "bdev_wait_for_examine" 00:17:21.224 } 00:17:21.224 ] 00:17:21.224 } 00:17:21.224 ] 00:17:21.224 } 00:17:21.224 [2024-11-20 07:27:24.928700] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:17:21.224 [2024-11-20 07:27:24.928949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74637 ] 00:17:21.224 [2024-11-20 07:27:25.100760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.491 [2024-11-20 07:27:25.212239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.750  [2024-11-20T07:27:27.059Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:17:23.126 00:17:23.126 07:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:17:23.126 07:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:17:23.126 07:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:17:23.126 07:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:17:23.126 07:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:17:23.126 07:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:17:23.126 07:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:17:23.384 07:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:17:23.384 07:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:17:23.384 07:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:17:23.384 07:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:17:23.384 { 00:17:23.384 "subsystems": [ 00:17:23.384 { 00:17:23.384 "subsystem": "bdev", 00:17:23.384 "config": [ 00:17:23.384 { 00:17:23.384 "params": { 00:17:23.384 "trtype": "pcie", 00:17:23.384 "traddr": "0000:00:10.0", 00:17:23.384 "name": "Nvme0" 00:17:23.384 }, 00:17:23.384 "method": "bdev_nvme_attach_controller" 00:17:23.384 }, 00:17:23.384 { 00:17:23.384 "method": "bdev_wait_for_examine" 00:17:23.384 } 00:17:23.384 ] 00:17:23.384 } 00:17:23.384 ] 00:17:23.384 } 00:17:23.384 [2024-11-20 07:27:27.211933] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:17:23.384 [2024-11-20 07:27:27.212081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74667 ] 00:17:23.643 [2024-11-20 07:27:27.385775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.643 [2024-11-20 07:27:27.509442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.211  [2024-11-20T07:27:29.081Z] Copying: 48/48 [kB] (average 46 MBps) 00:17:25.148 00:17:25.148 07:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:17:25.148 07:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:17:25.148 07:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:17:25.148 07:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:17:25.148 { 00:17:25.148 "subsystems": [ 00:17:25.148 { 00:17:25.148 "subsystem": "bdev", 00:17:25.148 "config": [ 00:17:25.148 { 00:17:25.148 "params": { 00:17:25.148 "trtype": "pcie", 00:17:25.148 "traddr": "0000:00:10.0", 00:17:25.148 "name": "Nvme0" 00:17:25.148 }, 00:17:25.148 "method": "bdev_nvme_attach_controller" 00:17:25.148 }, 00:17:25.148 { 00:17:25.148 "method": "bdev_wait_for_examine" 00:17:25.148 } 00:17:25.148 ] 00:17:25.148 } 00:17:25.148 ] 00:17:25.148 } 00:17:25.148 [2024-11-20 07:27:28.970513] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:17:25.148 [2024-11-20 07:27:28.970752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74692 ] 00:17:25.407 [2024-11-20 07:27:29.146854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.407 [2024-11-20 07:27:29.275122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.974  [2024-11-20T07:27:31.284Z] Copying: 48/48 [kB] (average 46 MBps) 00:17:27.351 00:17:27.351 07:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:27.351 07:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:17:27.351 07:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:27.351 07:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:17:27.351 07:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:17:27.351 07:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:17:27.351 07:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:17:27.351 07:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:27.351 07:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:17:27.351 07:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:17:27.351 07:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # 
set +x 00:17:27.351 { 00:17:27.351 "subsystems": [ 00:17:27.351 { 00:17:27.351 "subsystem": "bdev", 00:17:27.351 "config": [ 00:17:27.351 { 00:17:27.351 "params": { 00:17:27.351 "trtype": "pcie", 00:17:27.351 "traddr": "0000:00:10.0", 00:17:27.351 "name": "Nvme0" 00:17:27.351 }, 00:17:27.351 "method": "bdev_nvme_attach_controller" 00:17:27.351 }, 00:17:27.351 { 00:17:27.351 "method": "bdev_wait_for_examine" 00:17:27.351 } 00:17:27.351 ] 00:17:27.351 } 00:17:27.351 ] 00:17:27.351 } 00:17:27.351 [2024-11-20 07:27:30.970457] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:17:27.351 [2024-11-20 07:27:30.970768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74723 ] 00:17:27.351 [2024-11-20 07:27:31.159832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.610 [2024-11-20 07:27:31.280374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.869  [2024-11-20T07:27:32.739Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:17:28.806 00:17:28.806 00:17:28.806 real 0m36.545s 00:17:28.806 user 0m30.485s 00:17:28.806 sys 0m4.522s 00:17:28.806 07:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.806 07:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:17:28.806 ************************************ 00:17:28.806 END TEST dd_rw 00:17:28.806 ************************************ 00:17:28.807 07:27:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:17:28.807 07:27:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:28.807 07:27:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.807 07:27:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:17:29.065 ************************************ 00:17:29.065 START TEST dd_rw_offset 00:17:29.065 ************************************ 00:17:29.065 07:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:17:29.065 07:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:17:29.065 07:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:17:29.065 07:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:17:29.065 07:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:17:29.066 07:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:17:29.066 07:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=n3qb124f9y61qd6t6b0wbzyobqlwp7yy340zo6iph9brj0vv4yivm3oi2m28as0r9ns1xecr8oacyt04du7ypl6ngda3jhr3gpna5bsz2agmze6l96i3oam4qk9sdctbpti3sopu7vahkc6l2q38po57byyzmi83jrjwzih1obhjerqdsjsq40q5briofuljqdvb9k17kjjv6ht5535sc8009mjt7f33q3m7x5l4n9nem7f6lj2ix5stxjj9ioatajdnkyscdb1pcyjjklwd24a98iuppozpokxzo72ihgnnxgeatxrmk2ohu07i0vfut2pfhrjr4agvcdvzzdsaku6sq1hzx6pi6rt5rkpuek26ruibbyrukgtpmd2ydh5xk0ncletuzbz0mf492f4flv0dslhvf2atfi88qh1171ta6uriiova9bo8jlo50ka08ufxahjn2lz3b4c377nve4i26ep9rbla7ltiewxd46vc37x9rlcff0d7dujva3kqbbwhtphm9grsrng8rrsytmusc8gv2hkahrwusy1i8xxci3io94ugmv288gdxkkdsf3cops822zdii2mtmisakmqstmr548jufgmo06k1pjn2ty1d6otv7rjakpauebv42lc7fjjlj106iz8me2s98olps86r56mjn2ilw2oajszhe69uunz6lei9mlasoizt48qccz2iah73ser7o3gkix727alkophidlndhk5cjmeac3vu77gx7tf5112cg2e2vz7q2lrvt4vk23bjcb2ipk4eyafsxn8mh0siqnx07nzqg4ipb6sljfnjnhx2w61mqb5rz6ob9tb3jvwd5pfg0vvg3pt27h0akhwss7exh2sjiywzqmw8tovoqg3zdxk0ffu6bc8fl1hn6v4tdvianlaiew405vpcakefh1ayw12daoe8a5yy6ztxj66et9y5pqj7v99dpbe2ots3s18dxwse5ti73xmdp3q4a5kxp4gyzmuhvgd80k2iu319n7duo6lltbqfb0c5ya0xypvfseuyii4q6qulb50bqdea8gge4urq837zbdoi2dulcicat9402e1dhlqtnestylfmc694rehhw1n99bk2buhfblrzh5vl98ccnnqyvieh86kls9b84av5f2n0fps8vbktsil4joktqht9dvhcgm7y76odxw77hgj0eek34cd2c1n9ms6lljdpvsmuak7cbvn0r9025fjwt7wyjzydzj1wjlba4rpmplqtue9lemureseg789e0kmes6rw9qei71wxwo10r8sqbplx7nar6tok93aad8seyt2y5buco1jcclu3wqf0t6qf8pr217e99wsfkauzedv2whrqs06pmwab3o3357dm2b0348qek8688tammhipgj28iixnp79iuryvpv0vty19vuc9ecfu8uy0qycll60pzrqvky6rhejh28kfhgujqmehlxb59mnglupuqpq0l1f0s4t48wot8437g4w6u2khbaj58j6vmxwo7ba9gmuybric70jiv7laqpmugzhlikw5qykg1r25vlsjkwbe5ruaznx1khkj9mt72jcj5eaqls3g6osd6rbxltufx8l1ymhenhhx78aqbedh1o7ryuz7zbriv5nuv0esfs4igkz8b6au8mf1rfaxiwbj6q40khppml070sj0ov5wrpfigre08zmnjbglze9upq3efe05f0ky9c9xy7id6x6c076t89yrb572yk4ouu2natprqv5gtxg7tnsu9xbxwu0wddcl808kfdh1djpcht2megpv4o049m2out6u227cw2xlnpopd91h65dojmkra8mplntip0rtd555f27843tnrmznngkxxgscb2nh7q05f3dmdfdkudeo6knja2ldp107lo3tmelbglyxorts6xggk3hpu17msor4310ulb5xzqii42ucazupb9r4bri0tfqk48v58h8lrxrcyjz2de5ubfuuprq69sr8hu54x8jhagqkgyqanf1cguza8egqvgl79q1swp5az0hocsu3pz1g8ockmoc0uktl6py2i1e9neju9kukf3j6j9y539242c3c6hrjal1yrmq0bwdgi70jrnc56rjczt4dc3tu57uqe3m5yn3e39ji5i5pc7bvvcwx2h7wz1owny4nn5p19rpa3q1h3rexdzg9qjcsl3m6o63x2qfrl13wejsqerm0tgxb2k53oo0uyk0ixazuib4xvq9g9kfadm54uc646msl5xsalsx6iojbcf6p3i6dcuaguk2gsoy7547h3k8bgcoess70nf2j6rsefp6niqpbegmylkbcoa5z18gzetkprer1g1603gb3acnexjgywzs6vedbsnc7f5s8s51abazkb9z5w1tgvu3ogfljfsz2t7vusdcr0po51jwwsyql247k28mpa91t8dkd5arw86f2xo7321ukx8kz70j7k3xwugzyvnugzine1du68wmbmb1i2pp8fbr91aktd078p6p4rotljj7qdq1344x6ujrm2576koo9goq4xrvl8pgecd1ef8gealpfk4dx04oz7jnj64bywpfh8i9q0k2jekvmu463imm2end2665rrwvid3m3ufozkgxy9v3nc9kqpnfx4jahabznobm9mrbnvbk0hq6chbvl5dtmsbi20j7981pknnd2azwgt26tu4xs0md7k0187vppes7qccpt18krw28eavhh6hm1cqcba70m9ac2w18pjwm1zuq7cz8rbssexlgs73sv3r8u9kblq2y8mj2phnmw9cqctniimef9go3gp6tddu31jebv44oyo54vc4xh51eiwy68kbjlb1tz08zhnoont1rdup6cwsfyvl1svkwbfn4qziafmqlhnxjlkbd14jkpp4janw7lkedwwnekh9xjsu8ksgqsi2y0imwwnrnuwhl9euy54ndvevjq3x1q9e4p4zz5j05cqvx0fhrmk0iqss8cmihxic5kx6zsfcmg907r0ornd3dz5y6m1mrdh5f7v2tzxrynfz1jwofesz8zowi0nbxly51zfgz2qtk6xbpp7pmrh2bnigeoxy7qbq3k8pv6ymc5arnq9n7l1std8eqwq40nkysnrt3bo0ppogj3u58n18rp4gqf7t6356izanx4enzlc8zsgdwncntw6btoc11615o6kqswaj4srum7wozxjafza9ova4i13mnu9ghs93gpp6scsbrop1kihkv22n2zu0q3p2xxvlck3xpmnxunodsw5st4dw7hur2l9st9mkb44i2akyhj338ec0v21toe531p6i2bum6y3tofve8ys834pi4onl3pfd8oh5antxs72vx9lu5m9dpmox2fypgxecmi8u2eh337trv3pzzapr5b6nos3ratbe3ywv2uatm5fl0xut7bskvv7vn6ounnlic6jauoa4r2se4f2xi1iam0le5n0umjxxdhboxvr308yzp1np2d4j5hbrqer0xpxhaambbuh39gntor9q
5u9elzs5xdf1vygmytszlxb57g88elmejgn6g41ccwphz1elnsj7z76nz6l9adrl9dmc3suio3azimj4sjgikh19yzcay6ssvc7b2zvbc9c55l4so95bxc195pc3o1qiin7lfkhr6g8uybpckzpswnga70elcl1x2mjjgap34v3jdge83b2qh5eqwq0ynr5v9unownfubtdsmawjp78mpyiv7rrgb1vtfzmrek9jat3kd0bsidq8f0b62dq2jco6fw3jqtpo7zeyzvssmqf8tcjwy82t7c7jnvjbmxw92uopp5inxsmb2xbqtx5im2njbounv7hzbboep4o518e8k2b9acev9ustmavo591ceorid751c9oqjkyv6zcjicr6mofh5y2iqlgxsy6pc8dl41ocj8p4azlz11dtq4oopd2ai41miqee4uxwqbm0lfaxh6cjlf3re28klhqx0r9fuvg4tl6ugys2jjxqk77msubyk6cnkk11kjteyedbx3wq6nbhy4qird2ap5odmogcl6t8411a9lofgr 00:17:29.066 07:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:17:29.066 07:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:17:29.066 07:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:17:29.066 07:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:17:29.066 { 00:17:29.066 "subsystems": [ 00:17:29.066 { 00:17:29.066 "subsystem": "bdev", 00:17:29.066 "config": [ 00:17:29.066 { 00:17:29.066 "params": { 00:17:29.066 "trtype": "pcie", 00:17:29.066 "traddr": "0000:00:10.0", 00:17:29.066 "name": "Nvme0" 00:17:29.066 }, 00:17:29.066 "method": "bdev_nvme_attach_controller" 00:17:29.066 }, 00:17:29.066 { 00:17:29.066 "method": "bdev_wait_for_examine" 00:17:29.066 } 00:17:29.066 ] 00:17:29.066 } 00:17:29.066 ] 00:17:29.066 } 00:17:29.066 [2024-11-20 07:27:32.843651] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:17:29.066 [2024-11-20 07:27:32.843917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74764 ] 00:17:29.325 [2024-11-20 07:27:33.023281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.325 [2024-11-20 07:27:33.147398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.893  [2024-11-20T07:27:35.203Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:17:31.270 00:17:31.270 07:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:17:31.271 07:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:17:31.271 07:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:17:31.271 07:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:17:31.271 { 00:17:31.271 "subsystems": [ 00:17:31.271 { 00:17:31.271 "subsystem": "bdev", 00:17:31.271 "config": [ 00:17:31.271 { 00:17:31.271 "params": { 00:17:31.271 "trtype": "pcie", 00:17:31.271 "traddr": "0000:00:10.0", 00:17:31.271 "name": "Nvme0" 00:17:31.271 }, 00:17:31.271 "method": "bdev_nvme_attach_controller" 00:17:31.271 }, 00:17:31.271 { 00:17:31.271 "method": "bdev_wait_for_examine" 00:17:31.271 } 00:17:31.271 ] 00:17:31.271 } 00:17:31.271 ] 00:17:31.271 } 00:17:31.271 [2024-11-20 07:27:34.839022] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:17:31.271 [2024-11-20 07:27:34.839146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74793 ] 00:17:31.271 [2024-11-20 07:27:35.014521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.271 [2024-11-20 07:27:35.135497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.839  [2024-11-20T07:27:36.711Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:17:32.778 00:17:32.778 07:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:17:32.778 ************************************ 00:17:32.778 END TEST dd_rw_offset 00:17:32.778 ************************************ 00:17:32.779 07:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ n3qb124f9y61qd6t6b0wbzyobqlwp7yy340zo6iph9brj0vv4yivm3oi2m28as0r9ns1xecr8oacyt04du7ypl6ngda3jhr3gpna5bsz2agmze6l96i3oam4qk9sdctbpti3sopu7vahkc6l2q38po57byyzmi83jrjwzih1obhjerqdsjsq40q5briofuljqdvb9k17kjjv6ht5535sc8009mjt7f33q3m7x5l4n9nem7f6lj2ix5stxjj9ioatajdnkyscdb1pcyjjklwd24a98iuppozpokxzo72ihgnnxgeatxrmk2ohu07i0vfut2pfhrjr4agvcdvzzdsaku6sq1hzx6pi6rt5rkpuek26ruibbyrukgtpmd2ydh5xk0ncletuzbz0mf492f4flv0dslhvf2atfi88qh1171ta6uriiova9bo8jlo50ka08ufxahjn2lz3b4c377nve4i26ep9rbla7ltiewxd46vc37x9rlcff0d7dujva3kqbbwhtphm9grsrng8rrsytmusc8gv2hkahrwusy1i8xxci3io94ugmv288gdxkkdsf3cops822zdii2mtmisakmqstmr548jufgmo06k1pjn2ty1d6otv7rjakpauebv42lc7fjjlj106iz8me2s98olps86r56mjn2ilw2oajszhe69uunz6lei9mlasoizt48qccz2iah73ser7o3gkix727alkophidlndhk5cjmeac3vu77gx7tf5112cg2e2vz7q2lrvt4vk23bjcb2ipk4eyafsxn8mh0siqnx07nzqg4ipb6sljfnjnhx2w61mqb5rz6ob9tb3jvwd5pfg0vvg3pt27h0akhwss7exh2sjiywzqmw8tovoqg3zdxk0ffu6bc8fl1hn6v4tdvianlaiew405vpcakefh1ayw12daoe8a5yy6ztxj66et9y5pqj7v99dpbe2ots3s18dxwse5ti73xmdp3q4a5kxp4gyzmuhvgd80k2iu319n7duo6lltbqfb0c5ya0xypvfseuyii4q6qulb50bqdea8gge4urq837zbdoi2dulcicat9402e1dhlqtnestylfmc694rehhw1n99bk2buhfblrzh5vl98ccnnqyvieh86kls9b84av5f2n0fps8vbktsil4joktqht9dvhcgm7y76odxw77hgj0eek34cd2c1n9ms6lljdpvsmuak7cbvn0r9025fjwt7wyjzydzj1wjlba4rpmplqtue9lemureseg789e0kmes6rw9qei71wxwo10r8sqbplx7nar6tok93aad8seyt2y5buco1jcclu3wqf0t6qf8pr217e99wsfkauzedv2whrqs06pmwab3o3357dm2b0348qek8688tammhipgj28iixnp79iuryvpv0vty19vuc9ecfu8uy0qycll60pzrqvky6rhejh28kfhgujqmehlxb59mnglupuqpq0l1f0s4t48wot8437g4w6u2khbaj58j6vmxwo7ba9gmuybric70jiv7laqpmugzhlikw5qykg1r25vlsjkwbe5ruaznx1khkj9mt72jcj5eaqls3g6osd6rbxltufx8l1ymhenhhx78aqbedh1o7ryuz7zbriv5nuv0esfs4igkz8b6au8mf1rfaxiwbj6q40khppml070sj0ov5wrpfigre08zmnjbglze9upq3efe05f0ky9c9xy7id6x6c076t89yrb572yk4ouu2natprqv5gtxg7tnsu9xbxwu0wddcl808kfdh1djpcht2megpv4o049m2out6u227cw2xlnpopd91h65dojmkra8mplntip0rtd555f27843tnrmznngkxxgscb2nh7q05f3dmdfdkudeo6knja2ldp107lo3tmelbglyxorts6xggk3hpu17msor4310ulb5xzqii42ucazupb9r4bri0tfqk48v58h8lrxrcyjz2de5ubfuuprq69sr8hu54x8jhagqkgyqanf1cguza8egqvgl79q1swp5az0hocsu3pz1g8ockmoc0uktl6py2i1e9neju9kukf3j6j9y539242c3c6hrjal1yrmq0bwdgi70jrnc56rjczt4dc3tu57uqe3m5yn3e39ji5i5pc7bvvcwx2h7wz1owny4nn5p19rpa3q1h3rexdzg9qjcsl3m6o63x2qfrl13wejsqerm0tgxb2k53oo0uyk0ixazuib4xvq9g9kfadm54uc646msl5xsalsx6iojbcf6p3i6dcuaguk2gsoy7547h3k8bgcoess70nf2j6rsefp6niqpbegmylkbcoa5z18gzetkprer1g1603gb3acnexjgywzs6vedbsnc7f5s8s51abazkb9z5w1tgvu3ogfljfsz2t7vusdcr0po51jwwsyql247k28mpa91t8dkd5arw86f2xo7321ukx8kz70j7k3xwugzyvnugzine1du68wmbmb1i2pp8fbr91aktd078p6p4rotljj7qdq1344x6ujrm2576koo9goq4xrvl8pgecd1ef8gealpfk4dx04oz7jnj64b
ywpfh8i9q0k2jekvmu463imm2end2665rrwvid3m3ufozkgxy9v3nc9kqpnfx4jahabznobm9mrbnvbk0hq6chbvl5dtmsbi20j7981pknnd2azwgt26tu4xs0md7k0187vppes7qccpt18krw28eavhh6hm1cqcba70m9ac2w18pjwm1zuq7cz8rbssexlgs73sv3r8u9kblq2y8mj2phnmw9cqctniimef9go3gp6tddu31jebv44oyo54vc4xh51eiwy68kbjlb1tz08zhnoont1rdup6cwsfyvl1svkwbfn4qziafmqlhnxjlkbd14jkpp4janw7lkedwwnekh9xjsu8ksgqsi2y0imwwnrnuwhl9euy54ndvevjq3x1q9e4p4zz5j05cqvx0fhrmk0iqss8cmihxic5kx6zsfcmg907r0ornd3dz5y6m1mrdh5f7v2tzxrynfz1jwofesz8zowi0nbxly51zfgz2qtk6xbpp7pmrh2bnigeoxy7qbq3k8pv6ymc5arnq9n7l1std8eqwq40nkysnrt3bo0ppogj3u58n18rp4gqf7t6356izanx4enzlc8zsgdwncntw6btoc11615o6kqswaj4srum7wozxjafza9ova4i13mnu9ghs93gpp6scsbrop1kihkv22n2zu0q3p2xxvlck3xpmnxunodsw5st4dw7hur2l9st9mkb44i2akyhj338ec0v21toe531p6i2bum6y3tofve8ys834pi4onl3pfd8oh5antxs72vx9lu5m9dpmox2fypgxecmi8u2eh337trv3pzzapr5b6nos3ratbe3ywv2uatm5fl0xut7bskvv7vn6ounnlic6jauoa4r2se4f2xi1iam0le5n0umjxxdhboxvr308yzp1np2d4j5hbrqer0xpxhaambbuh39gntor9q5u9elzs5xdf1vygmytszlxb57g88elmejgn6g41ccwphz1elnsj7z76nz6l9adrl9dmc3suio3azimj4sjgikh19yzcay6ssvc7b2zvbc9c55l4so95bxc195pc3o1qiin7lfkhr6g8uybpckzpswnga70elcl1x2mjjgap34v3jdge83b2qh5eqwq0ynr5v9unownfubtdsmawjp78mpyiv7rrgb1vtfzmrek9jat3kd0bsidq8f0b62dq2jco6fw3jqtpo7zeyzvssmqf8tcjwy82t7c7jnvjbmxw92uopp5inxsmb2xbqtx5im2njbounv7hzbboep4o518e8k2b9acev9ustmavo591ceorid751c9oqjkyv6zcjicr6mofh5y2iqlgxsy6pc8dl41ocj8p4azlz11dtq4oopd2ai41miqee4uxwqbm0lfaxh6cjlf3re28klhqx0r9fuvg4tl6ugys2jjxqk77msubyk6cnkk11kjteyedbx3wq6nbhy4qird2ap5odmogcl6t8411a9lofgr == \n\3\q\b\1\2\4\f\9\y\6\1\q\d\6\t\6\b\0\w\b\z\y\o\b\q\l\w\p\7\y\y\3\4\0\z\o\6\i\p\h\9\b\r\j\0\v\v\4\y\i\v\m\3\o\i\2\m\2\8\a\s\0\r\9\n\s\1\x\e\c\r\8\o\a\c\y\t\0\4\d\u\7\y\p\l\6\n\g\d\a\3\j\h\r\3\g\p\n\a\5\b\s\z\2\a\g\m\z\e\6\l\9\6\i\3\o\a\m\4\q\k\9\s\d\c\t\b\p\t\i\3\s\o\p\u\7\v\a\h\k\c\6\l\2\q\3\8\p\o\5\7\b\y\y\z\m\i\8\3\j\r\j\w\z\i\h\1\o\b\h\j\e\r\q\d\s\j\s\q\4\0\q\5\b\r\i\o\f\u\l\j\q\d\v\b\9\k\1\7\k\j\j\v\6\h\t\5\5\3\5\s\c\8\0\0\9\m\j\t\7\f\3\3\q\3\m\7\x\5\l\4\n\9\n\e\m\7\f\6\l\j\2\i\x\5\s\t\x\j\j\9\i\o\a\t\a\j\d\n\k\y\s\c\d\b\1\p\c\y\j\j\k\l\w\d\2\4\a\9\8\i\u\p\p\o\z\p\o\k\x\z\o\7\2\i\h\g\n\n\x\g\e\a\t\x\r\m\k\2\o\h\u\0\7\i\0\v\f\u\t\2\p\f\h\r\j\r\4\a\g\v\c\d\v\z\z\d\s\a\k\u\6\s\q\1\h\z\x\6\p\i\6\r\t\5\r\k\p\u\e\k\2\6\r\u\i\b\b\y\r\u\k\g\t\p\m\d\2\y\d\h\5\x\k\0\n\c\l\e\t\u\z\b\z\0\m\f\4\9\2\f\4\f\l\v\0\d\s\l\h\v\f\2\a\t\f\i\8\8\q\h\1\1\7\1\t\a\6\u\r\i\i\o\v\a\9\b\o\8\j\l\o\5\0\k\a\0\8\u\f\x\a\h\j\n\2\l\z\3\b\4\c\3\7\7\n\v\e\4\i\2\6\e\p\9\r\b\l\a\7\l\t\i\e\w\x\d\4\6\v\c\3\7\x\9\r\l\c\f\f\0\d\7\d\u\j\v\a\3\k\q\b\b\w\h\t\p\h\m\9\g\r\s\r\n\g\8\r\r\s\y\t\m\u\s\c\8\g\v\2\h\k\a\h\r\w\u\s\y\1\i\8\x\x\c\i\3\i\o\9\4\u\g\m\v\2\8\8\g\d\x\k\k\d\s\f\3\c\o\p\s\8\2\2\z\d\i\i\2\m\t\m\i\s\a\k\m\q\s\t\m\r\5\4\8\j\u\f\g\m\o\0\6\k\1\p\j\n\2\t\y\1\d\6\o\t\v\7\r\j\a\k\p\a\u\e\b\v\4\2\l\c\7\f\j\j\l\j\1\0\6\i\z\8\m\e\2\s\9\8\o\l\p\s\8\6\r\5\6\m\j\n\2\i\l\w\2\o\a\j\s\z\h\e\6\9\u\u\n\z\6\l\e\i\9\m\l\a\s\o\i\z\t\4\8\q\c\c\z\2\i\a\h\7\3\s\e\r\7\o\3\g\k\i\x\7\2\7\a\l\k\o\p\h\i\d\l\n\d\h\k\5\c\j\m\e\a\c\3\v\u\7\7\g\x\7\t\f\5\1\1\2\c\g\2\e\2\v\z\7\q\2\l\r\v\t\4\v\k\2\3\b\j\c\b\2\i\p\k\4\e\y\a\f\s\x\n\8\m\h\0\s\i\q\n\x\0\7\n\z\q\g\4\i\p\b\6\s\l\j\f\n\j\n\h\x\2\w\6\1\m\q\b\5\r\z\6\o\b\9\t\b\3\j\v\w\d\5\p\f\g\0\v\v\g\3\p\t\2\7\h\0\a\k\h\w\s\s\7\e\x\h\2\s\j\i\y\w\z\q\m\w\8\t\o\v\o\q\g\3\z\d\x\k\0\f\f\u\6\b\c\8\f\l\1\h\n\6\v\4\t\d\v\i\a\n\l\a\i\e\w\4\0\5\v\p\c\a\k\e\f\h\1\a\y\w\1\2\d\a\o\e\8\a\5\y\y\6\z\t\x\j\6\6\e\t\9\y\5\p\q\j\7\v\9\9\d\p\b\e\2\o\t\s\3\s\1\8\d\x\w\s\e\5\t\i\7\3\x\m\d\p\3\q\4\a\5\k\x\p\4\g\y\z\m\u\h\v\g\d\8\0\k\2\i\u\3\1\9\n\7\d\u\o\6\l\l\t
\b\q\f\b\0\c\5\y\a\0\x\y\p\v\f\s\e\u\y\i\i\4\q\6\q\u\l\b\5\0\b\q\d\e\a\8\g\g\e\4\u\r\q\8\3\7\z\b\d\o\i\2\d\u\l\c\i\c\a\t\9\4\0\2\e\1\d\h\l\q\t\n\e\s\t\y\l\f\m\c\6\9\4\r\e\h\h\w\1\n\9\9\b\k\2\b\u\h\f\b\l\r\z\h\5\v\l\9\8\c\c\n\n\q\y\v\i\e\h\8\6\k\l\s\9\b\8\4\a\v\5\f\2\n\0\f\p\s\8\v\b\k\t\s\i\l\4\j\o\k\t\q\h\t\9\d\v\h\c\g\m\7\y\7\6\o\d\x\w\7\7\h\g\j\0\e\e\k\3\4\c\d\2\c\1\n\9\m\s\6\l\l\j\d\p\v\s\m\u\a\k\7\c\b\v\n\0\r\9\0\2\5\f\j\w\t\7\w\y\j\z\y\d\z\j\1\w\j\l\b\a\4\r\p\m\p\l\q\t\u\e\9\l\e\m\u\r\e\s\e\g\7\8\9\e\0\k\m\e\s\6\r\w\9\q\e\i\7\1\w\x\w\o\1\0\r\8\s\q\b\p\l\x\7\n\a\r\6\t\o\k\9\3\a\a\d\8\s\e\y\t\2\y\5\b\u\c\o\1\j\c\c\l\u\3\w\q\f\0\t\6\q\f\8\p\r\2\1\7\e\9\9\w\s\f\k\a\u\z\e\d\v\2\w\h\r\q\s\0\6\p\m\w\a\b\3\o\3\3\5\7\d\m\2\b\0\3\4\8\q\e\k\8\6\8\8\t\a\m\m\h\i\p\g\j\2\8\i\i\x\n\p\7\9\i\u\r\y\v\p\v\0\v\t\y\1\9\v\u\c\9\e\c\f\u\8\u\y\0\q\y\c\l\l\6\0\p\z\r\q\v\k\y\6\r\h\e\j\h\2\8\k\f\h\g\u\j\q\m\e\h\l\x\b\5\9\m\n\g\l\u\p\u\q\p\q\0\l\1\f\0\s\4\t\4\8\w\o\t\8\4\3\7\g\4\w\6\u\2\k\h\b\a\j\5\8\j\6\v\m\x\w\o\7\b\a\9\g\m\u\y\b\r\i\c\7\0\j\i\v\7\l\a\q\p\m\u\g\z\h\l\i\k\w\5\q\y\k\g\1\r\2\5\v\l\s\j\k\w\b\e\5\r\u\a\z\n\x\1\k\h\k\j\9\m\t\7\2\j\c\j\5\e\a\q\l\s\3\g\6\o\s\d\6\r\b\x\l\t\u\f\x\8\l\1\y\m\h\e\n\h\h\x\7\8\a\q\b\e\d\h\1\o\7\r\y\u\z\7\z\b\r\i\v\5\n\u\v\0\e\s\f\s\4\i\g\k\z\8\b\6\a\u\8\m\f\1\r\f\a\x\i\w\b\j\6\q\4\0\k\h\p\p\m\l\0\7\0\s\j\0\o\v\5\w\r\p\f\i\g\r\e\0\8\z\m\n\j\b\g\l\z\e\9\u\p\q\3\e\f\e\0\5\f\0\k\y\9\c\9\x\y\7\i\d\6\x\6\c\0\7\6\t\8\9\y\r\b\5\7\2\y\k\4\o\u\u\2\n\a\t\p\r\q\v\5\g\t\x\g\7\t\n\s\u\9\x\b\x\w\u\0\w\d\d\c\l\8\0\8\k\f\d\h\1\d\j\p\c\h\t\2\m\e\g\p\v\4\o\0\4\9\m\2\o\u\t\6\u\2\2\7\c\w\2\x\l\n\p\o\p\d\9\1\h\6\5\d\o\j\m\k\r\a\8\m\p\l\n\t\i\p\0\r\t\d\5\5\5\f\2\7\8\4\3\t\n\r\m\z\n\n\g\k\x\x\g\s\c\b\2\n\h\7\q\0\5\f\3\d\m\d\f\d\k\u\d\e\o\6\k\n\j\a\2\l\d\p\1\0\7\l\o\3\t\m\e\l\b\g\l\y\x\o\r\t\s\6\x\g\g\k\3\h\p\u\1\7\m\s\o\r\4\3\1\0\u\l\b\5\x\z\q\i\i\4\2\u\c\a\z\u\p\b\9\r\4\b\r\i\0\t\f\q\k\4\8\v\5\8\h\8\l\r\x\r\c\y\j\z\2\d\e\5\u\b\f\u\u\p\r\q\6\9\s\r\8\h\u\5\4\x\8\j\h\a\g\q\k\g\y\q\a\n\f\1\c\g\u\z\a\8\e\g\q\v\g\l\7\9\q\1\s\w\p\5\a\z\0\h\o\c\s\u\3\p\z\1\g\8\o\c\k\m\o\c\0\u\k\t\l\6\p\y\2\i\1\e\9\n\e\j\u\9\k\u\k\f\3\j\6\j\9\y\5\3\9\2\4\2\c\3\c\6\h\r\j\a\l\1\y\r\m\q\0\b\w\d\g\i\7\0\j\r\n\c\5\6\r\j\c\z\t\4\d\c\3\t\u\5\7\u\q\e\3\m\5\y\n\3\e\3\9\j\i\5\i\5\p\c\7\b\v\v\c\w\x\2\h\7\w\z\1\o\w\n\y\4\n\n\5\p\1\9\r\p\a\3\q\1\h\3\r\e\x\d\z\g\9\q\j\c\s\l\3\m\6\o\6\3\x\2\q\f\r\l\1\3\w\e\j\s\q\e\r\m\0\t\g\x\b\2\k\5\3\o\o\0\u\y\k\0\i\x\a\z\u\i\b\4\x\v\q\9\g\9\k\f\a\d\m\5\4\u\c\6\4\6\m\s\l\5\x\s\a\l\s\x\6\i\o\j\b\c\f\6\p\3\i\6\d\c\u\a\g\u\k\2\g\s\o\y\7\5\4\7\h\3\k\8\b\g\c\o\e\s\s\7\0\n\f\2\j\6\r\s\e\f\p\6\n\i\q\p\b\e\g\m\y\l\k\b\c\o\a\5\z\1\8\g\z\e\t\k\p\r\e\r\1\g\1\6\0\3\g\b\3\a\c\n\e\x\j\g\y\w\z\s\6\v\e\d\b\s\n\c\7\f\5\s\8\s\5\1\a\b\a\z\k\b\9\z\5\w\1\t\g\v\u\3\o\g\f\l\j\f\s\z\2\t\7\v\u\s\d\c\r\0\p\o\5\1\j\w\w\s\y\q\l\2\4\7\k\2\8\m\p\a\9\1\t\8\d\k\d\5\a\r\w\8\6\f\2\x\o\7\3\2\1\u\k\x\8\k\z\7\0\j\7\k\3\x\w\u\g\z\y\v\n\u\g\z\i\n\e\1\d\u\6\8\w\m\b\m\b\1\i\2\p\p\8\f\b\r\9\1\a\k\t\d\0\7\8\p\6\p\4\r\o\t\l\j\j\7\q\d\q\1\3\4\4\x\6\u\j\r\m\2\5\7\6\k\o\o\9\g\o\q\4\x\r\v\l\8\p\g\e\c\d\1\e\f\8\g\e\a\l\p\f\k\4\d\x\0\4\o\z\7\j\n\j\6\4\b\y\w\p\f\h\8\i\9\q\0\k\2\j\e\k\v\m\u\4\6\3\i\m\m\2\e\n\d\2\6\6\5\r\r\w\v\i\d\3\m\3\u\f\o\z\k\g\x\y\9\v\3\n\c\9\k\q\p\n\f\x\4\j\a\h\a\b\z\n\o\b\m\9\m\r\b\n\v\b\k\0\h\q\6\c\h\b\v\l\5\d\t\m\s\b\i\2\0\j\7\9\8\1\p\k\n\n\d\2\a\z\w\g\t\2\6\t\u\4\x\s\0\m\d\7\k\0\1\8\7\v\p\p\e\s\7\q\c\c\p\t\1\8\k\r\w\2\8\e\a\v\h\h\6\h\m\1\c\q\c\b\a\7\0\m\9\a\c\2\w\1\8\p\j\w\m\1\z\u\q\7\c\z\8\r\b\s\s\e\x\l\g\s\7\3\s\v\3\r\8\u\9\k\
b\l\q\2\y\8\m\j\2\p\h\n\m\w\9\c\q\c\t\n\i\i\m\e\f\9\g\o\3\g\p\6\t\d\d\u\3\1\j\e\b\v\4\4\o\y\o\5\4\v\c\4\x\h\5\1\e\i\w\y\6\8\k\b\j\l\b\1\t\z\0\8\z\h\n\o\o\n\t\1\r\d\u\p\6\c\w\s\f\y\v\l\1\s\v\k\w\b\f\n\4\q\z\i\a\f\m\q\l\h\n\x\j\l\k\b\d\1\4\j\k\p\p\4\j\a\n\w\7\l\k\e\d\w\w\n\e\k\h\9\x\j\s\u\8\k\s\g\q\s\i\2\y\0\i\m\w\w\n\r\n\u\w\h\l\9\e\u\y\5\4\n\d\v\e\v\j\q\3\x\1\q\9\e\4\p\4\z\z\5\j\0\5\c\q\v\x\0\f\h\r\m\k\0\i\q\s\s\8\c\m\i\h\x\i\c\5\k\x\6\z\s\f\c\m\g\9\0\7\r\0\o\r\n\d\3\d\z\5\y\6\m\1\m\r\d\h\5\f\7\v\2\t\z\x\r\y\n\f\z\1\j\w\o\f\e\s\z\8\z\o\w\i\0\n\b\x\l\y\5\1\z\f\g\z\2\q\t\k\6\x\b\p\p\7\p\m\r\h\2\b\n\i\g\e\o\x\y\7\q\b\q\3\k\8\p\v\6\y\m\c\5\a\r\n\q\9\n\7\l\1\s\t\d\8\e\q\w\q\4\0\n\k\y\s\n\r\t\3\b\o\0\p\p\o\g\j\3\u\5\8\n\1\8\r\p\4\g\q\f\7\t\6\3\5\6\i\z\a\n\x\4\e\n\z\l\c\8\z\s\g\d\w\n\c\n\t\w\6\b\t\o\c\1\1\6\1\5\o\6\k\q\s\w\a\j\4\s\r\u\m\7\w\o\z\x\j\a\f\z\a\9\o\v\a\4\i\1\3\m\n\u\9\g\h\s\9\3\g\p\p\6\s\c\s\b\r\o\p\1\k\i\h\k\v\2\2\n\2\z\u\0\q\3\p\2\x\x\v\l\c\k\3\x\p\m\n\x\u\n\o\d\s\w\5\s\t\4\d\w\7\h\u\r\2\l\9\s\t\9\m\k\b\4\4\i\2\a\k\y\h\j\3\3\8\e\c\0\v\2\1\t\o\e\5\3\1\p\6\i\2\b\u\m\6\y\3\t\o\f\v\e\8\y\s\8\3\4\p\i\4\o\n\l\3\p\f\d\8\o\h\5\a\n\t\x\s\7\2\v\x\9\l\u\5\m\9\d\p\m\o\x\2\f\y\p\g\x\e\c\m\i\8\u\2\e\h\3\3\7\t\r\v\3\p\z\z\a\p\r\5\b\6\n\o\s\3\r\a\t\b\e\3\y\w\v\2\u\a\t\m\5\f\l\0\x\u\t\7\b\s\k\v\v\7\v\n\6\o\u\n\n\l\i\c\6\j\a\u\o\a\4\r\2\s\e\4\f\2\x\i\1\i\a\m\0\l\e\5\n\0\u\m\j\x\x\d\h\b\o\x\v\r\3\0\8\y\z\p\1\n\p\2\d\4\j\5\h\b\r\q\e\r\0\x\p\x\h\a\a\m\b\b\u\h\3\9\g\n\t\o\r\9\q\5\u\9\e\l\z\s\5\x\d\f\1\v\y\g\m\y\t\s\z\l\x\b\5\7\g\8\8\e\l\m\e\j\g\n\6\g\4\1\c\c\w\p\h\z\1\e\l\n\s\j\7\z\7\6\n\z\6\l\9\a\d\r\l\9\d\m\c\3\s\u\i\o\3\a\z\i\m\j\4\s\j\g\i\k\h\1\9\y\z\c\a\y\6\s\s\v\c\7\b\2\z\v\b\c\9\c\5\5\l\4\s\o\9\5\b\x\c\1\9\5\p\c\3\o\1\q\i\i\n\7\l\f\k\h\r\6\g\8\u\y\b\p\c\k\z\p\s\w\n\g\a\7\0\e\l\c\l\1\x\2\m\j\j\g\a\p\3\4\v\3\j\d\g\e\8\3\b\2\q\h\5\e\q\w\q\0\y\n\r\5\v\9\u\n\o\w\n\f\u\b\t\d\s\m\a\w\j\p\7\8\m\p\y\i\v\7\r\r\g\b\1\v\t\f\z\m\r\e\k\9\j\a\t\3\k\d\0\b\s\i\d\q\8\f\0\b\6\2\d\q\2\j\c\o\6\f\w\3\j\q\t\p\o\7\z\e\y\z\v\s\s\m\q\f\8\t\c\j\w\y\8\2\t\7\c\7\j\n\v\j\b\m\x\w\9\2\u\o\p\p\5\i\n\x\s\m\b\2\x\b\q\t\x\5\i\m\2\n\j\b\o\u\n\v\7\h\z\b\b\o\e\p\4\o\5\1\8\e\8\k\2\b\9\a\c\e\v\9\u\s\t\m\a\v\o\5\9\1\c\e\o\r\i\d\7\5\1\c\9\o\q\j\k\y\v\6\z\c\j\i\c\r\6\m\o\f\h\5\y\2\i\q\l\g\x\s\y\6\p\c\8\d\l\4\1\o\c\j\8\p\4\a\z\l\z\1\1\d\t\q\4\o\o\p\d\2\a\i\4\1\m\i\q\e\e\4\u\x\w\q\b\m\0\l\f\a\x\h\6\c\j\l\f\3\r\e\2\8\k\l\h\q\x\0\r\9\f\u\v\g\4\t\l\6\u\g\y\s\2\j\j\x\q\k\7\7\m\s\u\b\y\k\6\c\n\k\k\1\1\k\j\t\e\y\e\d\b\x\3\w\q\6\n\b\h\y\4\q\i\r\d\2\a\p\5\o\d\m\o\g\c\l\6\t\8\4\1\1\a\9\l\o\f\g\r ]] 00:17:32.779 00:17:32.779 real 0m3.877s 00:17:32.779 user 0m3.189s 00:17:32.779 sys 0m0.506s 00:17:32.779 07:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.779 07:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:17:32.779 07:27:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:17:32.779 07:27:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:17:32.779 07:27:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:32.779 07:27:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:17:32.779 07:27:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:17:32.779 07:27:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:17:32.779 07:27:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:17:32.779 07:27:36 spdk_dd.spdk_dd_basic_rw -- 
dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:32.779 07:27:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:17:32.779 07:27:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:17:32.779 07:27:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:17:32.779 { 00:17:32.779 "subsystems": [ 00:17:32.779 { 00:17:32.779 "subsystem": "bdev", 00:17:32.779 "config": [ 00:17:32.779 { 00:17:32.779 "params": { 00:17:32.779 "trtype": "pcie", 00:17:32.779 "traddr": "0000:00:10.0", 00:17:32.779 "name": "Nvme0" 00:17:32.779 }, 00:17:32.779 "method": "bdev_nvme_attach_controller" 00:17:32.779 }, 00:17:32.779 { 00:17:32.779 "method": "bdev_wait_for_examine" 00:17:32.779 } 00:17:32.779 ] 00:17:32.779 } 00:17:32.779 ] 00:17:32.779 } 00:17:33.039 [2024-11-20 07:27:36.732101] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:17:33.039 [2024-11-20 07:27:36.732226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74834 ] 00:17:33.039 [2024-11-20 07:27:36.890321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.299 [2024-11-20 07:27:37.009061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.558  [2024-11-20T07:27:38.884Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:17:34.951 00:17:34.951 07:27:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:34.951 ************************************ 00:17:34.951 END TEST spdk_dd_basic_rw 00:17:34.951 ************************************ 00:17:34.951 00:17:34.951 real 0m44.779s 00:17:34.951 user 0m36.945s 00:17:34.951 sys 0m5.883s 00:17:34.951 07:27:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.951 07:27:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:17:34.951 07:27:38 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:17:34.951 07:27:38 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:34.951 07:27:38 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.951 07:27:38 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:17:34.951 ************************************ 00:17:34.951 START TEST spdk_dd_posix 00:17:34.951 ************************************ 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:17:34.951 * Looking for test storage... 
00:17:34.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.951 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:34.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.952 --rc genhtml_branch_coverage=1 00:17:34.952 --rc genhtml_function_coverage=1 00:17:34.952 --rc genhtml_legend=1 00:17:34.952 --rc geninfo_all_blocks=1 00:17:34.952 --rc geninfo_unexecuted_blocks=1 00:17:34.952 00:17:34.952 ' 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:34.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.952 --rc genhtml_branch_coverage=1 00:17:34.952 --rc genhtml_function_coverage=1 00:17:34.952 --rc genhtml_legend=1 00:17:34.952 --rc geninfo_all_blocks=1 00:17:34.952 --rc geninfo_unexecuted_blocks=1 00:17:34.952 00:17:34.952 ' 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:34.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.952 --rc genhtml_branch_coverage=1 00:17:34.952 --rc genhtml_function_coverage=1 00:17:34.952 --rc genhtml_legend=1 00:17:34.952 --rc geninfo_all_blocks=1 00:17:34.952 --rc geninfo_unexecuted_blocks=1 00:17:34.952 00:17:34.952 ' 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:34.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.952 --rc genhtml_branch_coverage=1 00:17:34.952 --rc genhtml_function_coverage=1 00:17:34.952 --rc genhtml_legend=1 00:17:34.952 --rc geninfo_all_blocks=1 00:17:34.952 --rc geninfo_unexecuted_blocks=1 00:17:34.952 00:17:34.952 ' 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # export PATH 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:17:34.952 * First test run, liburing in use 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.952 07:27:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:17:35.213 ************************************ 00:17:35.213 START TEST dd_flag_append 00:17:35.213 ************************************ 00:17:35.213 07:27:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:17:35.213 07:27:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:17:35.213 07:27:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:17:35.213 07:27:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:17:35.213 07:27:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:17:35.213 07:27:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:17:35.213 07:27:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=kblm4uf0qw0wfaqc1gkhsoau3tbeg7xw 00:17:35.213 07:27:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:17:35.213 07:27:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:17:35.213 07:27:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:17:35.213 07:27:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=nqsj3qjll7udpxmm4rvbqo0pm9kcuefc 00:17:35.213 07:27:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s kblm4uf0qw0wfaqc1gkhsoau3tbeg7xw 00:17:35.213 07:27:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s nqsj3qjll7udpxmm4rvbqo0pm9kcuefc 00:17:35.213 07:27:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 
-- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:17:35.213 [2024-11-20 07:27:38.955100] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:17:35.213 [2024-11-20 07:27:38.955267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74917 ] 00:17:35.213 [2024-11-20 07:27:39.128377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.478 [2024-11-20 07:27:39.248716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.738  [2024-11-20T07:27:41.051Z] Copying: 32/32 [B] (average 31 kBps) 00:17:37.118 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ nqsj3qjll7udpxmm4rvbqo0pm9kcuefckblm4uf0qw0wfaqc1gkhsoau3tbeg7xw == \n\q\s\j\3\q\j\l\l\7\u\d\p\x\m\m\4\r\v\b\q\o\0\p\m\9\k\c\u\e\f\c\k\b\l\m\4\u\f\0\q\w\0\w\f\a\q\c\1\g\k\h\s\o\a\u\3\t\b\e\g\7\x\w ]] 00:17:37.118 00:17:37.118 real 0m1.862s 00:17:37.118 user 0m1.535s 00:17:37.118 sys 0m0.211s 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:17:37.118 ************************************ 00:17:37.118 END TEST dd_flag_append 00:17:37.118 ************************************ 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:17:37.118 ************************************ 00:17:37.118 START TEST dd_flag_directory 00:17:37.118 ************************************ 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix.dd_flag_directory 
-- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:17:37.118 07:27:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:37.118 [2024-11-20 07:27:40.882061] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:17:37.118 [2024-11-20 07:27:40.882204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74961 ] 00:17:37.377 [2024-11-20 07:27:41.058198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.377 [2024-11-20 07:27:41.181956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.637 [2024-11-20 07:27:41.499030] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:17:37.637 [2024-11-20 07:27:41.499092] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:17:37.637 [2024-11-20 07:27:41.499112] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:38.577 [2024-11-20 07:27:42.412459] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:17:38.837 07:27:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:17:38.837 07:27:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:38.837 07:27:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:17:38.837 07:27:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:17:38.837 07:27:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:17:38.837 07:27:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:38.837 07:27:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:17:38.837 07:27:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:17:38.837 07:27:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:17:38.837 07:27:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:38.837 07:27:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:17:38.837 07:27:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:38.837 07:27:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.837 07:27:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:38.837 07:27:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.837 07:27:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:38.838 07:27:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:17:38.838 07:27:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:17:38.838 [2024-11-20 07:27:42.741886] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:17:38.838 [2024-11-20 07:27:42.742043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74987 ] 00:17:39.097 [2024-11-20 07:27:42.913709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.356 [2024-11-20 07:27:43.034217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.616 [2024-11-20 07:27:43.350835] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:17:39.616 [2024-11-20 07:27:43.350917] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:17:39.616 [2024-11-20 07:27:43.350937] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:40.554 [2024-11-20 07:27:44.264182] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:17:40.813 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:17:40.813 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:40.813 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:17:40.813 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:17:40.813 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:17:40.813 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:40.813 00:17:40.813 real 0m3.729s 00:17:40.813 user 0m3.098s 00:17:40.813 sys 0m0.430s 00:17:40.813 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:40.813 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:17:40.813 ************************************ 00:17:40.813 END TEST dd_flag_directory 00:17:40.813 ************************************ 00:17:40.813 07:27:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:17:40.813 07:27:44 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:40.813 07:27:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.813 07:27:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:17:40.813 ************************************ 00:17:40.813 START TEST dd_flag_nofollow 00:17:40.813 ************************************ 00:17:40.813 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:17:40.813 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:17:40.814 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:17:40.814 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:17:40.814 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:17:40.814 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:40.814 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:17:40.814 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:40.814 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:40.814 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.814 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:40.814 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.814 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:40.814 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.814 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:40.814 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:17:40.814 07:27:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:40.814 [2024-11-20 07:27:44.687436] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:17:40.814 [2024-11-20 07:27:44.687586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75029 ] 00:17:41.073 [2024-11-20 07:27:44.862926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.073 [2024-11-20 07:27:44.986215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.643 [2024-11-20 07:27:45.317168] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:17:41.643 [2024-11-20 07:27:45.317223] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:17:41.643 [2024-11-20 07:27:45.317243] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:42.584 [2024-11-20 07:27:46.216591] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:17:42.584 07:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:17:42.584 07:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:42.584 07:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:17:42.584 07:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:17:42.584 07:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:17:42.584 07:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:42.584 07:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:17:42.584 07:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:17:42.584 07:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:17:42.584 07:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:42.584 07:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.584 07:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:42.584 07:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.584 07:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:42.584 07:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.584 07:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:42.584 07:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
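The block above is the dd_flag_nofollow case: a symlink is pointed at the input file with ln -fs and spdk_dd is expected to refuse it when --iflag=nofollow is set. Stripped of the test harness (paths abbreviated; the surrounding run_test/NOT plumbing omitted), the check being traced is roughly:

    # sketch of the nofollow check seen in the trace above -- not the literal dd/posix.sh code
    ln -fs dd.dump0 dd.dump0.link                    # symlink pointing at the real input file
    if ./build/bin/spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1; then
        echo "spdk_dd followed a symlink despite --iflag=nofollow" >&2
        exit 1                                       # the test only passes when this invocation fails
    fi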
00:17:42.584 07:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:17:42.844 [2024-11-20 07:27:46.554084] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:17:42.844 [2024-11-20 07:27:46.554221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75056 ] 00:17:42.844 [2024-11-20 07:27:46.725348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.104 [2024-11-20 07:27:46.846237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.373 [2024-11-20 07:27:47.174908] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:17:43.373 [2024-11-20 07:27:47.174955] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:17:43.373 [2024-11-20 07:27:47.174976] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:44.329 [2024-11-20 07:27:48.094280] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:17:44.589 07:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:17:44.589 07:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:44.589 07:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:17:44.589 07:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:17:44.589 07:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:17:44.589 07:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:44.589 07:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:17:44.589 07:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:17:44.589 07:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:17:44.589 07:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:44.589 [2024-11-20 07:27:48.442353] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
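The repeated valid_exec_arg / type -t / es=... lines around each expected failure come from the NOT helper in autotest_common.sh, which runs a command and succeeds only if that command fails; the es=216 -> es=88 -> es=1 steps above are its exit-status normalization. A rough shell equivalent of that pattern, reconstructed from the trace rather than taken from the SPDK source, is:

    NOT() {
        local es=0
        "$@" || es=$?                        # run the wrapped command, remember how it failed
        (( es > 128 )) && es=$(( es - 128 )) # fold large status codes, as in the es=216 -> 88 step
        (( es != 0 ))                        # success here means the wrapped command did fail
    }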
00:17:44.589 [2024-11-20 07:27:48.442496] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75081 ] 00:17:44.848 [2024-11-20 07:27:48.617104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.848 [2024-11-20 07:27:48.735907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.416  [2024-11-20T07:27:50.288Z] Copying: 512/512 [B] (average 500 kBps) 00:17:46.355 00:17:46.355 07:27:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ xxyq666dy58m05csd1vvfy0zhkuyaa5w60u5xnpargroe5q7ibs980r8r9sfxwqp6qi45l44yy8oq990ua7visy5sh78as8f0jogwa2zot7md73xt03ykozbuiftmgasofyvkkujl4d5dow94e1hteads7d0qf65lsi0abytajif7axhkvxagi64o2bmrmdltyw2aedrpkub3u3m1dgklvjvmuf4dhmzo3kkz4yj4ag3b331oiu4ni8blbty8ehkj8z9ad32se38jxvszpxtdecks1l6fzki9a068ewqhn3btu3u8ujt038vixz5ogjrar5uexhgufrdfmlrj29w98t9e08qddyrp7rus813a9emr156v46r12iwxt3r4pnw4dp7tyrc9bkrvg1y4cqaz4sgvsgv44buszau7q1rayav0vn6pa5vdponk5g21q9xk8lcochvkjmoeyba5tx0trlo9t15y8lkvkbv9p5lmgla6yc9xhvgo2ubh28pi2os == \x\x\y\q\6\6\6\d\y\5\8\m\0\5\c\s\d\1\v\v\f\y\0\z\h\k\u\y\a\a\5\w\6\0\u\5\x\n\p\a\r\g\r\o\e\5\q\7\i\b\s\9\8\0\r\8\r\9\s\f\x\w\q\p\6\q\i\4\5\l\4\4\y\y\8\o\q\9\9\0\u\a\7\v\i\s\y\5\s\h\7\8\a\s\8\f\0\j\o\g\w\a\2\z\o\t\7\m\d\7\3\x\t\0\3\y\k\o\z\b\u\i\f\t\m\g\a\s\o\f\y\v\k\k\u\j\l\4\d\5\d\o\w\9\4\e\1\h\t\e\a\d\s\7\d\0\q\f\6\5\l\s\i\0\a\b\y\t\a\j\i\f\7\a\x\h\k\v\x\a\g\i\6\4\o\2\b\m\r\m\d\l\t\y\w\2\a\e\d\r\p\k\u\b\3\u\3\m\1\d\g\k\l\v\j\v\m\u\f\4\d\h\m\z\o\3\k\k\z\4\y\j\4\a\g\3\b\3\3\1\o\i\u\4\n\i\8\b\l\b\t\y\8\e\h\k\j\8\z\9\a\d\3\2\s\e\3\8\j\x\v\s\z\p\x\t\d\e\c\k\s\1\l\6\f\z\k\i\9\a\0\6\8\e\w\q\h\n\3\b\t\u\3\u\8\u\j\t\0\3\8\v\i\x\z\5\o\g\j\r\a\r\5\u\e\x\h\g\u\f\r\d\f\m\l\r\j\2\9\w\9\8\t\9\e\0\8\q\d\d\y\r\p\7\r\u\s\8\1\3\a\9\e\m\r\1\5\6\v\4\6\r\1\2\i\w\x\t\3\r\4\p\n\w\4\d\p\7\t\y\r\c\9\b\k\r\v\g\1\y\4\c\q\a\z\4\s\g\v\s\g\v\4\4\b\u\s\z\a\u\7\q\1\r\a\y\a\v\0\v\n\6\p\a\5\v\d\p\o\n\k\5\g\2\1\q\9\x\k\8\l\c\o\c\h\v\k\j\m\o\e\y\b\a\5\t\x\0\t\r\l\o\9\t\1\5\y\8\l\k\v\k\b\v\9\p\5\l\m\g\l\a\6\y\c\9\x\h\v\g\o\2\u\b\h\2\8\p\i\2\o\s ]] 00:17:46.355 00:17:46.355 real 0m5.659s 00:17:46.355 user 0m4.689s 00:17:46.355 sys 0m0.656s 00:17:46.355 07:27:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.355 07:27:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:17:46.355 ************************************ 00:17:46.355 END TEST dd_flag_nofollow 00:17:46.355 ************************************ 00:17:46.616 07:27:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:17:46.616 07:27:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:46.616 07:27:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.616 07:27:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:17:46.616 ************************************ 00:17:46.616 START TEST dd_flag_noatime 00:17:46.616 ************************************ 00:17:46.616 07:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:17:46.616 07:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:17:46.616 07:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:17:46.616 07:27:50 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:17:46.616 07:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:17:46.616 07:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:17:46.616 07:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:46.616 07:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732087669 00:17:46.616 07:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:46.616 07:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732087670 00:17:46.616 07:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:17:47.618 07:27:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:47.618 [2024-11-20 07:27:51.425428] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:17:47.618 [2024-11-20 07:27:51.425587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75128 ] 00:17:47.878 [2024-11-20 07:27:51.597562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.878 [2024-11-20 07:27:51.725648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.137  [2024-11-20T07:27:53.450Z] Copying: 512/512 [B] (average 500 kBps) 00:17:49.517 00:17:49.517 07:27:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:49.517 07:27:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732087669 )) 00:17:49.517 07:27:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:49.517 07:27:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732087670 )) 00:17:49.517 07:27:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:49.517 [2024-11-20 07:27:53.402684] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
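The dd_flag_noatime run starting here records the input file's access time with stat --printf=%X, sleeps one second, copies with --iflag=noatime, and then re-checks the timestamp. Reduced to its essentials (file names shortened, harness omitted), the property being verified is approximately:

    atime_before=$(stat --printf=%X dd.dump0)        # access time in epoch seconds
    sleep 1                                          # a normal read after this point would bump atime
    ./build/bin/spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
    atime_after=$(stat --printf=%X dd.dump0)
    (( atime_before == atime_after ))                # with noatime honored, the access time must not move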
00:17:49.517 [2024-11-20 07:27:53.402920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75157 ] 00:17:49.777 [2024-11-20 07:27:53.592546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.036 [2024-11-20 07:27:53.715049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.295  [2024-11-20T07:27:55.609Z] Copying: 512/512 [B] (average 500 kBps) 00:17:51.676 00:17:51.676 07:27:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:51.676 07:27:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732087674 )) 00:17:51.676 00:17:51.676 real 0m4.982s 00:17:51.676 user 0m3.231s 00:17:51.676 sys 0m0.525s 00:17:51.676 07:27:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:51.676 07:27:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:17:51.676 ************************************ 00:17:51.676 END TEST dd_flag_noatime 00:17:51.676 ************************************ 00:17:51.676 07:27:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:17:51.676 07:27:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:51.676 07:27:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:51.676 07:27:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:17:51.676 ************************************ 00:17:51.676 START TEST dd_flags_misc 00:17:51.676 ************************************ 00:17:51.676 07:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:17:51.676 07:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:17:51.676 07:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:17:51.676 07:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:17:51.676 07:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:17:51.676 07:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:17:51.676 07:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:17:51.676 07:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:17:51.676 07:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:17:51.676 07:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:17:51.676 [2024-11-20 07:27:55.448892] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:17:51.676 [2024-11-20 07:27:55.449046] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75201 ] 00:17:51.935 [2024-11-20 07:27:55.624415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.935 [2024-11-20 07:27:55.751974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.195  [2024-11-20T07:27:57.539Z] Copying: 512/512 [B] (average 500 kBps) 00:17:53.606 00:17:53.606 07:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ dq0k1491uk3qfry7imdk45muckabzrxkl0mk8bowor6fq0l0gq60w9qmde4oo2ychd185yvab9kmfq6beilo3keg8m28zzye43nqxhmlq6mdal3rkq8xxbcflixtzzrg45tz56qz8mamq8p8belycw122d4oym6jyhfdpsujnyivk5sngg84jqkn6waqy5izd2w4upn512xv974ju5gae29sgtm3e64hj64kxqnp8dj7pcfz9jhqbr7s2x1lfvjqdwxb45h240ccv7fqw8itqalbj1hb7ssmhfo1kuwr92yk38wunly1xoeoodayqpccgzf3kli1dlcd628vutd38lbcvjvjl5yihjkortz2do1lruy131609sj7mb4qfw7vmp888068xx2lqew2esx4i1z08jdc55bhkjzexyjipdywgbu589po04bgaatl7tflg5irg8f1hv358ofpx12r83kj0jjrb67wzjpwiasf4len5uvalc49d1vcdlb078pc == \d\q\0\k\1\4\9\1\u\k\3\q\f\r\y\7\i\m\d\k\4\5\m\u\c\k\a\b\z\r\x\k\l\0\m\k\8\b\o\w\o\r\6\f\q\0\l\0\g\q\6\0\w\9\q\m\d\e\4\o\o\2\y\c\h\d\1\8\5\y\v\a\b\9\k\m\f\q\6\b\e\i\l\o\3\k\e\g\8\m\2\8\z\z\y\e\4\3\n\q\x\h\m\l\q\6\m\d\a\l\3\r\k\q\8\x\x\b\c\f\l\i\x\t\z\z\r\g\4\5\t\z\5\6\q\z\8\m\a\m\q\8\p\8\b\e\l\y\c\w\1\2\2\d\4\o\y\m\6\j\y\h\f\d\p\s\u\j\n\y\i\v\k\5\s\n\g\g\8\4\j\q\k\n\6\w\a\q\y\5\i\z\d\2\w\4\u\p\n\5\1\2\x\v\9\7\4\j\u\5\g\a\e\2\9\s\g\t\m\3\e\6\4\h\j\6\4\k\x\q\n\p\8\d\j\7\p\c\f\z\9\j\h\q\b\r\7\s\2\x\1\l\f\v\j\q\d\w\x\b\4\5\h\2\4\0\c\c\v\7\f\q\w\8\i\t\q\a\l\b\j\1\h\b\7\s\s\m\h\f\o\1\k\u\w\r\9\2\y\k\3\8\w\u\n\l\y\1\x\o\e\o\o\d\a\y\q\p\c\c\g\z\f\3\k\l\i\1\d\l\c\d\6\2\8\v\u\t\d\3\8\l\b\c\v\j\v\j\l\5\y\i\h\j\k\o\r\t\z\2\d\o\1\l\r\u\y\1\3\1\6\0\9\s\j\7\m\b\4\q\f\w\7\v\m\p\8\8\8\0\6\8\x\x\2\l\q\e\w\2\e\s\x\4\i\1\z\0\8\j\d\c\5\5\b\h\k\j\z\e\x\y\j\i\p\d\y\w\g\b\u\5\8\9\p\o\0\4\b\g\a\a\t\l\7\t\f\l\g\5\i\r\g\8\f\1\h\v\3\5\8\o\f\p\x\1\2\r\8\3\k\j\0\j\j\r\b\6\7\w\z\j\p\w\i\a\s\f\4\l\e\n\5\u\v\a\l\c\4\9\d\1\v\c\d\l\b\0\7\8\p\c ]] 00:17:53.606 07:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:17:53.606 07:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:17:53.606 [2024-11-20 07:27:57.341863] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
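dd_flags_misc, which this block and the next several belong to, drives spdk_dd through every pairing of a read-side flag with a write-side flag. Reconstructed from the flags_ro/flags_rw arrays and loop markers in the trace (the 512-byte input generation and the per-pass output verification are left out of this sketch):

    flags_ro=(direct nonblock)                 # flags exercised on the input side
    flags_rw=("${flags_ro[@]}" sync dsync)     # output side additionally covers sync and dsync
    for flag_ro in "${flags_ro[@]}"; do
        for flag_rw in "${flags_rw[@]}"; do
            ./build/bin/spdk_dd --if=dd.dump0 --iflag="$flag_ro" \
                                --of=dd.dump1 --oflag="$flag_rw"
        done
    done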
00:17:53.606 [2024-11-20 07:27:57.341987] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75225 ] 00:17:53.606 [2024-11-20 07:27:57.516233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.864 [2024-11-20 07:27:57.638844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.123  [2024-11-20T07:27:59.434Z] Copying: 512/512 [B] (average 500 kBps) 00:17:55.501 00:17:55.501 07:27:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ dq0k1491uk3qfry7imdk45muckabzrxkl0mk8bowor6fq0l0gq60w9qmde4oo2ychd185yvab9kmfq6beilo3keg8m28zzye43nqxhmlq6mdal3rkq8xxbcflixtzzrg45tz56qz8mamq8p8belycw122d4oym6jyhfdpsujnyivk5sngg84jqkn6waqy5izd2w4upn512xv974ju5gae29sgtm3e64hj64kxqnp8dj7pcfz9jhqbr7s2x1lfvjqdwxb45h240ccv7fqw8itqalbj1hb7ssmhfo1kuwr92yk38wunly1xoeoodayqpccgzf3kli1dlcd628vutd38lbcvjvjl5yihjkortz2do1lruy131609sj7mb4qfw7vmp888068xx2lqew2esx4i1z08jdc55bhkjzexyjipdywgbu589po04bgaatl7tflg5irg8f1hv358ofpx12r83kj0jjrb67wzjpwiasf4len5uvalc49d1vcdlb078pc == \d\q\0\k\1\4\9\1\u\k\3\q\f\r\y\7\i\m\d\k\4\5\m\u\c\k\a\b\z\r\x\k\l\0\m\k\8\b\o\w\o\r\6\f\q\0\l\0\g\q\6\0\w\9\q\m\d\e\4\o\o\2\y\c\h\d\1\8\5\y\v\a\b\9\k\m\f\q\6\b\e\i\l\o\3\k\e\g\8\m\2\8\z\z\y\e\4\3\n\q\x\h\m\l\q\6\m\d\a\l\3\r\k\q\8\x\x\b\c\f\l\i\x\t\z\z\r\g\4\5\t\z\5\6\q\z\8\m\a\m\q\8\p\8\b\e\l\y\c\w\1\2\2\d\4\o\y\m\6\j\y\h\f\d\p\s\u\j\n\y\i\v\k\5\s\n\g\g\8\4\j\q\k\n\6\w\a\q\y\5\i\z\d\2\w\4\u\p\n\5\1\2\x\v\9\7\4\j\u\5\g\a\e\2\9\s\g\t\m\3\e\6\4\h\j\6\4\k\x\q\n\p\8\d\j\7\p\c\f\z\9\j\h\q\b\r\7\s\2\x\1\l\f\v\j\q\d\w\x\b\4\5\h\2\4\0\c\c\v\7\f\q\w\8\i\t\q\a\l\b\j\1\h\b\7\s\s\m\h\f\o\1\k\u\w\r\9\2\y\k\3\8\w\u\n\l\y\1\x\o\e\o\o\d\a\y\q\p\c\c\g\z\f\3\k\l\i\1\d\l\c\d\6\2\8\v\u\t\d\3\8\l\b\c\v\j\v\j\l\5\y\i\h\j\k\o\r\t\z\2\d\o\1\l\r\u\y\1\3\1\6\0\9\s\j\7\m\b\4\q\f\w\7\v\m\p\8\8\8\0\6\8\x\x\2\l\q\e\w\2\e\s\x\4\i\1\z\0\8\j\d\c\5\5\b\h\k\j\z\e\x\y\j\i\p\d\y\w\g\b\u\5\8\9\p\o\0\4\b\g\a\a\t\l\7\t\f\l\g\5\i\r\g\8\f\1\h\v\3\5\8\o\f\p\x\1\2\r\8\3\k\j\0\j\j\r\b\6\7\w\z\j\p\w\i\a\s\f\4\l\e\n\5\u\v\a\l\c\4\9\d\1\v\c\d\l\b\0\7\8\p\c ]] 00:17:55.501 07:27:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:17:55.501 07:27:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:17:55.501 [2024-11-20 07:27:59.243790] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:17:55.501 [2024-11-20 07:27:59.243947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75246 ] 00:17:55.501 [2024-11-20 07:27:59.416271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.760 [2024-11-20 07:27:59.537896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.019  [2024-11-20T07:28:01.331Z] Copying: 512/512 [B] (average 100 kBps) 00:17:57.398 00:17:57.398 07:28:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ dq0k1491uk3qfry7imdk45muckabzrxkl0mk8bowor6fq0l0gq60w9qmde4oo2ychd185yvab9kmfq6beilo3keg8m28zzye43nqxhmlq6mdal3rkq8xxbcflixtzzrg45tz56qz8mamq8p8belycw122d4oym6jyhfdpsujnyivk5sngg84jqkn6waqy5izd2w4upn512xv974ju5gae29sgtm3e64hj64kxqnp8dj7pcfz9jhqbr7s2x1lfvjqdwxb45h240ccv7fqw8itqalbj1hb7ssmhfo1kuwr92yk38wunly1xoeoodayqpccgzf3kli1dlcd628vutd38lbcvjvjl5yihjkortz2do1lruy131609sj7mb4qfw7vmp888068xx2lqew2esx4i1z08jdc55bhkjzexyjipdywgbu589po04bgaatl7tflg5irg8f1hv358ofpx12r83kj0jjrb67wzjpwiasf4len5uvalc49d1vcdlb078pc == \d\q\0\k\1\4\9\1\u\k\3\q\f\r\y\7\i\m\d\k\4\5\m\u\c\k\a\b\z\r\x\k\l\0\m\k\8\b\o\w\o\r\6\f\q\0\l\0\g\q\6\0\w\9\q\m\d\e\4\o\o\2\y\c\h\d\1\8\5\y\v\a\b\9\k\m\f\q\6\b\e\i\l\o\3\k\e\g\8\m\2\8\z\z\y\e\4\3\n\q\x\h\m\l\q\6\m\d\a\l\3\r\k\q\8\x\x\b\c\f\l\i\x\t\z\z\r\g\4\5\t\z\5\6\q\z\8\m\a\m\q\8\p\8\b\e\l\y\c\w\1\2\2\d\4\o\y\m\6\j\y\h\f\d\p\s\u\j\n\y\i\v\k\5\s\n\g\g\8\4\j\q\k\n\6\w\a\q\y\5\i\z\d\2\w\4\u\p\n\5\1\2\x\v\9\7\4\j\u\5\g\a\e\2\9\s\g\t\m\3\e\6\4\h\j\6\4\k\x\q\n\p\8\d\j\7\p\c\f\z\9\j\h\q\b\r\7\s\2\x\1\l\f\v\j\q\d\w\x\b\4\5\h\2\4\0\c\c\v\7\f\q\w\8\i\t\q\a\l\b\j\1\h\b\7\s\s\m\h\f\o\1\k\u\w\r\9\2\y\k\3\8\w\u\n\l\y\1\x\o\e\o\o\d\a\y\q\p\c\c\g\z\f\3\k\l\i\1\d\l\c\d\6\2\8\v\u\t\d\3\8\l\b\c\v\j\v\j\l\5\y\i\h\j\k\o\r\t\z\2\d\o\1\l\r\u\y\1\3\1\6\0\9\s\j\7\m\b\4\q\f\w\7\v\m\p\8\8\8\0\6\8\x\x\2\l\q\e\w\2\e\s\x\4\i\1\z\0\8\j\d\c\5\5\b\h\k\j\z\e\x\y\j\i\p\d\y\w\g\b\u\5\8\9\p\o\0\4\b\g\a\a\t\l\7\t\f\l\g\5\i\r\g\8\f\1\h\v\3\5\8\o\f\p\x\1\2\r\8\3\k\j\0\j\j\r\b\6\7\w\z\j\p\w\i\a\s\f\4\l\e\n\5\u\v\a\l\c\4\9\d\1\v\c\d\l\b\0\7\8\p\c ]] 00:17:57.398 07:28:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:17:57.398 07:28:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:17:57.398 [2024-11-20 07:28:01.113368] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
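The very long [[ dq0k1491... == \d\q\0\k... ]] entries are the per-pass data check: both sides hold the same generated input string, and the right-hand side only looks backslash-escaped because xtrace prints the quoted side of == inside [[ ]] that way. Conceptually each pass ends with a comparison along the lines of:

    expected=$(< dd.dump0)                 # hypothetical variable: the data written to the input file
    [[ "$(< dd.dump1)" == "$expected" ]]   # the copied output must reproduce it byte for byte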
00:17:57.399 [2024-11-20 07:28:01.113501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75271 ] 00:17:57.399 [2024-11-20 07:28:01.287819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.657 [2024-11-20 07:28:01.407323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.917  [2024-11-20T07:28:03.234Z] Copying: 512/512 [B] (average 125 kBps) 00:17:59.301 00:17:59.301 07:28:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ dq0k1491uk3qfry7imdk45muckabzrxkl0mk8bowor6fq0l0gq60w9qmde4oo2ychd185yvab9kmfq6beilo3keg8m28zzye43nqxhmlq6mdal3rkq8xxbcflixtzzrg45tz56qz8mamq8p8belycw122d4oym6jyhfdpsujnyivk5sngg84jqkn6waqy5izd2w4upn512xv974ju5gae29sgtm3e64hj64kxqnp8dj7pcfz9jhqbr7s2x1lfvjqdwxb45h240ccv7fqw8itqalbj1hb7ssmhfo1kuwr92yk38wunly1xoeoodayqpccgzf3kli1dlcd628vutd38lbcvjvjl5yihjkortz2do1lruy131609sj7mb4qfw7vmp888068xx2lqew2esx4i1z08jdc55bhkjzexyjipdywgbu589po04bgaatl7tflg5irg8f1hv358ofpx12r83kj0jjrb67wzjpwiasf4len5uvalc49d1vcdlb078pc == \d\q\0\k\1\4\9\1\u\k\3\q\f\r\y\7\i\m\d\k\4\5\m\u\c\k\a\b\z\r\x\k\l\0\m\k\8\b\o\w\o\r\6\f\q\0\l\0\g\q\6\0\w\9\q\m\d\e\4\o\o\2\y\c\h\d\1\8\5\y\v\a\b\9\k\m\f\q\6\b\e\i\l\o\3\k\e\g\8\m\2\8\z\z\y\e\4\3\n\q\x\h\m\l\q\6\m\d\a\l\3\r\k\q\8\x\x\b\c\f\l\i\x\t\z\z\r\g\4\5\t\z\5\6\q\z\8\m\a\m\q\8\p\8\b\e\l\y\c\w\1\2\2\d\4\o\y\m\6\j\y\h\f\d\p\s\u\j\n\y\i\v\k\5\s\n\g\g\8\4\j\q\k\n\6\w\a\q\y\5\i\z\d\2\w\4\u\p\n\5\1\2\x\v\9\7\4\j\u\5\g\a\e\2\9\s\g\t\m\3\e\6\4\h\j\6\4\k\x\q\n\p\8\d\j\7\p\c\f\z\9\j\h\q\b\r\7\s\2\x\1\l\f\v\j\q\d\w\x\b\4\5\h\2\4\0\c\c\v\7\f\q\w\8\i\t\q\a\l\b\j\1\h\b\7\s\s\m\h\f\o\1\k\u\w\r\9\2\y\k\3\8\w\u\n\l\y\1\x\o\e\o\o\d\a\y\q\p\c\c\g\z\f\3\k\l\i\1\d\l\c\d\6\2\8\v\u\t\d\3\8\l\b\c\v\j\v\j\l\5\y\i\h\j\k\o\r\t\z\2\d\o\1\l\r\u\y\1\3\1\6\0\9\s\j\7\m\b\4\q\f\w\7\v\m\p\8\8\8\0\6\8\x\x\2\l\q\e\w\2\e\s\x\4\i\1\z\0\8\j\d\c\5\5\b\h\k\j\z\e\x\y\j\i\p\d\y\w\g\b\u\5\8\9\p\o\0\4\b\g\a\a\t\l\7\t\f\l\g\5\i\r\g\8\f\1\h\v\3\5\8\o\f\p\x\1\2\r\8\3\k\j\0\j\j\r\b\6\7\w\z\j\p\w\i\a\s\f\4\l\e\n\5\u\v\a\l\c\4\9\d\1\v\c\d\l\b\0\7\8\p\c ]] 00:17:59.301 07:28:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:17:59.301 07:28:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:17:59.301 07:28:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:17:59.301 07:28:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:17:59.301 07:28:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:17:59.301 07:28:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:17:59.301 [2024-11-20 07:28:03.006182] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:17:59.301 [2024-11-20 07:28:03.006310] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75296 ] 00:17:59.301 [2024-11-20 07:28:03.179214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.560 [2024-11-20 07:28:03.300204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.821  [2024-11-20T07:28:05.134Z] Copying: 512/512 [B] (average 500 kBps) 00:18:01.201 00:18:01.201 07:28:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7guusyytkfkprs7n4jwga0labr9tsge87v9hrrkbib8erdg7yzlowj01v3szc0s0cjsfniwiixflsl8f33giygt16dc21f2vgm276c65lac19bjvx2456oksfo0jo80gzp9t1ag859xy45wi6n6gi9vm97td8l2q3earu3fta8ifdcat374scp03wmvoxhvl3n0vavt7y1cet67zdgsova3an321kmxzjv64ix3pzj51y2jdwcbsss2zw48vcm5iy4mrp7entkhp9x94xrdz39h4zdm3w94fdtnn453amfek1ahqzfwy5z6ihwyw0k4js43lpypu3eyf8om31b5j6jai1jwjawlzn2ztg31xj1e76x3k460purttd8nadcvshpe94dy01w40wmbc583tz78ukgvlamqlhvtrtwwtko3d6h1jewu753vdsod8zh477qldty2kyk6h8x7omi9j9hnm4b1mphmoknjwc3qqofyhzowb074wv37zt2be0wla == \7\g\u\u\s\y\y\t\k\f\k\p\r\s\7\n\4\j\w\g\a\0\l\a\b\r\9\t\s\g\e\8\7\v\9\h\r\r\k\b\i\b\8\e\r\d\g\7\y\z\l\o\w\j\0\1\v\3\s\z\c\0\s\0\c\j\s\f\n\i\w\i\i\x\f\l\s\l\8\f\3\3\g\i\y\g\t\1\6\d\c\2\1\f\2\v\g\m\2\7\6\c\6\5\l\a\c\1\9\b\j\v\x\2\4\5\6\o\k\s\f\o\0\j\o\8\0\g\z\p\9\t\1\a\g\8\5\9\x\y\4\5\w\i\6\n\6\g\i\9\v\m\9\7\t\d\8\l\2\q\3\e\a\r\u\3\f\t\a\8\i\f\d\c\a\t\3\7\4\s\c\p\0\3\w\m\v\o\x\h\v\l\3\n\0\v\a\v\t\7\y\1\c\e\t\6\7\z\d\g\s\o\v\a\3\a\n\3\2\1\k\m\x\z\j\v\6\4\i\x\3\p\z\j\5\1\y\2\j\d\w\c\b\s\s\s\2\z\w\4\8\v\c\m\5\i\y\4\m\r\p\7\e\n\t\k\h\p\9\x\9\4\x\r\d\z\3\9\h\4\z\d\m\3\w\9\4\f\d\t\n\n\4\5\3\a\m\f\e\k\1\a\h\q\z\f\w\y\5\z\6\i\h\w\y\w\0\k\4\j\s\4\3\l\p\y\p\u\3\e\y\f\8\o\m\3\1\b\5\j\6\j\a\i\1\j\w\j\a\w\l\z\n\2\z\t\g\3\1\x\j\1\e\7\6\x\3\k\4\6\0\p\u\r\t\t\d\8\n\a\d\c\v\s\h\p\e\9\4\d\y\0\1\w\4\0\w\m\b\c\5\8\3\t\z\7\8\u\k\g\v\l\a\m\q\l\h\v\t\r\t\w\w\t\k\o\3\d\6\h\1\j\e\w\u\7\5\3\v\d\s\o\d\8\z\h\4\7\7\q\l\d\t\y\2\k\y\k\6\h\8\x\7\o\m\i\9\j\9\h\n\m\4\b\1\m\p\h\m\o\k\n\j\w\c\3\q\q\o\f\y\h\z\o\w\b\0\7\4\w\v\3\7\z\t\2\b\e\0\w\l\a ]] 00:18:01.201 07:28:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:01.201 07:28:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:18:01.201 [2024-11-20 07:28:04.895635] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:18:01.201 [2024-11-20 07:28:04.895769] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75316 ] 00:18:01.201 [2024-11-20 07:28:05.061663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.460 [2024-11-20 07:28:05.186147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.719  [2024-11-20T07:28:07.086Z] Copying: 512/512 [B] (average 500 kBps) 00:18:03.153 00:18:03.154 07:28:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7guusyytkfkprs7n4jwga0labr9tsge87v9hrrkbib8erdg7yzlowj01v3szc0s0cjsfniwiixflsl8f33giygt16dc21f2vgm276c65lac19bjvx2456oksfo0jo80gzp9t1ag859xy45wi6n6gi9vm97td8l2q3earu3fta8ifdcat374scp03wmvoxhvl3n0vavt7y1cet67zdgsova3an321kmxzjv64ix3pzj51y2jdwcbsss2zw48vcm5iy4mrp7entkhp9x94xrdz39h4zdm3w94fdtnn453amfek1ahqzfwy5z6ihwyw0k4js43lpypu3eyf8om31b5j6jai1jwjawlzn2ztg31xj1e76x3k460purttd8nadcvshpe94dy01w40wmbc583tz78ukgvlamqlhvtrtwwtko3d6h1jewu753vdsod8zh477qldty2kyk6h8x7omi9j9hnm4b1mphmoknjwc3qqofyhzowb074wv37zt2be0wla == \7\g\u\u\s\y\y\t\k\f\k\p\r\s\7\n\4\j\w\g\a\0\l\a\b\r\9\t\s\g\e\8\7\v\9\h\r\r\k\b\i\b\8\e\r\d\g\7\y\z\l\o\w\j\0\1\v\3\s\z\c\0\s\0\c\j\s\f\n\i\w\i\i\x\f\l\s\l\8\f\3\3\g\i\y\g\t\1\6\d\c\2\1\f\2\v\g\m\2\7\6\c\6\5\l\a\c\1\9\b\j\v\x\2\4\5\6\o\k\s\f\o\0\j\o\8\0\g\z\p\9\t\1\a\g\8\5\9\x\y\4\5\w\i\6\n\6\g\i\9\v\m\9\7\t\d\8\l\2\q\3\e\a\r\u\3\f\t\a\8\i\f\d\c\a\t\3\7\4\s\c\p\0\3\w\m\v\o\x\h\v\l\3\n\0\v\a\v\t\7\y\1\c\e\t\6\7\z\d\g\s\o\v\a\3\a\n\3\2\1\k\m\x\z\j\v\6\4\i\x\3\p\z\j\5\1\y\2\j\d\w\c\b\s\s\s\2\z\w\4\8\v\c\m\5\i\y\4\m\r\p\7\e\n\t\k\h\p\9\x\9\4\x\r\d\z\3\9\h\4\z\d\m\3\w\9\4\f\d\t\n\n\4\5\3\a\m\f\e\k\1\a\h\q\z\f\w\y\5\z\6\i\h\w\y\w\0\k\4\j\s\4\3\l\p\y\p\u\3\e\y\f\8\o\m\3\1\b\5\j\6\j\a\i\1\j\w\j\a\w\l\z\n\2\z\t\g\3\1\x\j\1\e\7\6\x\3\k\4\6\0\p\u\r\t\t\d\8\n\a\d\c\v\s\h\p\e\9\4\d\y\0\1\w\4\0\w\m\b\c\5\8\3\t\z\7\8\u\k\g\v\l\a\m\q\l\h\v\t\r\t\w\w\t\k\o\3\d\6\h\1\j\e\w\u\7\5\3\v\d\s\o\d\8\z\h\4\7\7\q\l\d\t\y\2\k\y\k\6\h\8\x\7\o\m\i\9\j\9\h\n\m\4\b\1\m\p\h\m\o\k\n\j\w\c\3\q\q\o\f\y\h\z\o\w\b\0\7\4\w\v\3\7\z\t\2\b\e\0\w\l\a ]] 00:18:03.154 07:28:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:03.154 07:28:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:18:03.154 [2024-11-20 07:28:06.771882] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:18:03.154 [2024-11-20 07:28:06.772029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75335 ] 00:18:03.154 [2024-11-20 07:28:06.946713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.154 [2024-11-20 07:28:07.068252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.722  [2024-11-20T07:28:09.036Z] Copying: 512/512 [B] (average 125 kBps) 00:18:05.103 00:18:05.103 07:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7guusyytkfkprs7n4jwga0labr9tsge87v9hrrkbib8erdg7yzlowj01v3szc0s0cjsfniwiixflsl8f33giygt16dc21f2vgm276c65lac19bjvx2456oksfo0jo80gzp9t1ag859xy45wi6n6gi9vm97td8l2q3earu3fta8ifdcat374scp03wmvoxhvl3n0vavt7y1cet67zdgsova3an321kmxzjv64ix3pzj51y2jdwcbsss2zw48vcm5iy4mrp7entkhp9x94xrdz39h4zdm3w94fdtnn453amfek1ahqzfwy5z6ihwyw0k4js43lpypu3eyf8om31b5j6jai1jwjawlzn2ztg31xj1e76x3k460purttd8nadcvshpe94dy01w40wmbc583tz78ukgvlamqlhvtrtwwtko3d6h1jewu753vdsod8zh477qldty2kyk6h8x7omi9j9hnm4b1mphmoknjwc3qqofyhzowb074wv37zt2be0wla == \7\g\u\u\s\y\y\t\k\f\k\p\r\s\7\n\4\j\w\g\a\0\l\a\b\r\9\t\s\g\e\8\7\v\9\h\r\r\k\b\i\b\8\e\r\d\g\7\y\z\l\o\w\j\0\1\v\3\s\z\c\0\s\0\c\j\s\f\n\i\w\i\i\x\f\l\s\l\8\f\3\3\g\i\y\g\t\1\6\d\c\2\1\f\2\v\g\m\2\7\6\c\6\5\l\a\c\1\9\b\j\v\x\2\4\5\6\o\k\s\f\o\0\j\o\8\0\g\z\p\9\t\1\a\g\8\5\9\x\y\4\5\w\i\6\n\6\g\i\9\v\m\9\7\t\d\8\l\2\q\3\e\a\r\u\3\f\t\a\8\i\f\d\c\a\t\3\7\4\s\c\p\0\3\w\m\v\o\x\h\v\l\3\n\0\v\a\v\t\7\y\1\c\e\t\6\7\z\d\g\s\o\v\a\3\a\n\3\2\1\k\m\x\z\j\v\6\4\i\x\3\p\z\j\5\1\y\2\j\d\w\c\b\s\s\s\2\z\w\4\8\v\c\m\5\i\y\4\m\r\p\7\e\n\t\k\h\p\9\x\9\4\x\r\d\z\3\9\h\4\z\d\m\3\w\9\4\f\d\t\n\n\4\5\3\a\m\f\e\k\1\a\h\q\z\f\w\y\5\z\6\i\h\w\y\w\0\k\4\j\s\4\3\l\p\y\p\u\3\e\y\f\8\o\m\3\1\b\5\j\6\j\a\i\1\j\w\j\a\w\l\z\n\2\z\t\g\3\1\x\j\1\e\7\6\x\3\k\4\6\0\p\u\r\t\t\d\8\n\a\d\c\v\s\h\p\e\9\4\d\y\0\1\w\4\0\w\m\b\c\5\8\3\t\z\7\8\u\k\g\v\l\a\m\q\l\h\v\t\r\t\w\w\t\k\o\3\d\6\h\1\j\e\w\u\7\5\3\v\d\s\o\d\8\z\h\4\7\7\q\l\d\t\y\2\k\y\k\6\h\8\x\7\o\m\i\9\j\9\h\n\m\4\b\1\m\p\h\m\o\k\n\j\w\c\3\q\q\o\f\y\h\z\o\w\b\0\7\4\w\v\3\7\z\t\2\b\e\0\w\l\a ]] 00:18:05.103 07:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:05.103 07:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:18:05.103 [2024-11-20 07:28:08.701173] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:18:05.103 [2024-11-20 07:28:08.701304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75360 ] 00:18:05.103 [2024-11-20 07:28:08.872819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.103 [2024-11-20 07:28:09.005837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.673  [2024-11-20T07:28:10.665Z] Copying: 512/512 [B] (average 100 kBps) 00:18:06.732 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7guusyytkfkprs7n4jwga0labr9tsge87v9hrrkbib8erdg7yzlowj01v3szc0s0cjsfniwiixflsl8f33giygt16dc21f2vgm276c65lac19bjvx2456oksfo0jo80gzp9t1ag859xy45wi6n6gi9vm97td8l2q3earu3fta8ifdcat374scp03wmvoxhvl3n0vavt7y1cet67zdgsova3an321kmxzjv64ix3pzj51y2jdwcbsss2zw48vcm5iy4mrp7entkhp9x94xrdz39h4zdm3w94fdtnn453amfek1ahqzfwy5z6ihwyw0k4js43lpypu3eyf8om31b5j6jai1jwjawlzn2ztg31xj1e76x3k460purttd8nadcvshpe94dy01w40wmbc583tz78ukgvlamqlhvtrtwwtko3d6h1jewu753vdsod8zh477qldty2kyk6h8x7omi9j9hnm4b1mphmoknjwc3qqofyhzowb074wv37zt2be0wla == \7\g\u\u\s\y\y\t\k\f\k\p\r\s\7\n\4\j\w\g\a\0\l\a\b\r\9\t\s\g\e\8\7\v\9\h\r\r\k\b\i\b\8\e\r\d\g\7\y\z\l\o\w\j\0\1\v\3\s\z\c\0\s\0\c\j\s\f\n\i\w\i\i\x\f\l\s\l\8\f\3\3\g\i\y\g\t\1\6\d\c\2\1\f\2\v\g\m\2\7\6\c\6\5\l\a\c\1\9\b\j\v\x\2\4\5\6\o\k\s\f\o\0\j\o\8\0\g\z\p\9\t\1\a\g\8\5\9\x\y\4\5\w\i\6\n\6\g\i\9\v\m\9\7\t\d\8\l\2\q\3\e\a\r\u\3\f\t\a\8\i\f\d\c\a\t\3\7\4\s\c\p\0\3\w\m\v\o\x\h\v\l\3\n\0\v\a\v\t\7\y\1\c\e\t\6\7\z\d\g\s\o\v\a\3\a\n\3\2\1\k\m\x\z\j\v\6\4\i\x\3\p\z\j\5\1\y\2\j\d\w\c\b\s\s\s\2\z\w\4\8\v\c\m\5\i\y\4\m\r\p\7\e\n\t\k\h\p\9\x\9\4\x\r\d\z\3\9\h\4\z\d\m\3\w\9\4\f\d\t\n\n\4\5\3\a\m\f\e\k\1\a\h\q\z\f\w\y\5\z\6\i\h\w\y\w\0\k\4\j\s\4\3\l\p\y\p\u\3\e\y\f\8\o\m\3\1\b\5\j\6\j\a\i\1\j\w\j\a\w\l\z\n\2\z\t\g\3\1\x\j\1\e\7\6\x\3\k\4\6\0\p\u\r\t\t\d\8\n\a\d\c\v\s\h\p\e\9\4\d\y\0\1\w\4\0\w\m\b\c\5\8\3\t\z\7\8\u\k\g\v\l\a\m\q\l\h\v\t\r\t\w\w\t\k\o\3\d\6\h\1\j\e\w\u\7\5\3\v\d\s\o\d\8\z\h\4\7\7\q\l\d\t\y\2\k\y\k\6\h\8\x\7\o\m\i\9\j\9\h\n\m\4\b\1\m\p\h\m\o\k\n\j\w\c\3\q\q\o\f\y\h\z\o\w\b\0\7\4\w\v\3\7\z\t\2\b\e\0\w\l\a ]] 00:18:06.733 00:18:06.733 real 0m15.198s 00:18:06.733 user 0m12.536s 00:18:06.733 sys 0m1.745s 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.733 ************************************ 00:18:06.733 END TEST dd_flags_misc 00:18:06.733 ************************************ 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:18:06.733 * Second test run, disabling liburing, forcing AIO 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:18:06.733 ************************************ 00:18:06.733 START TEST dd_flag_append_forced_aio 
00:18:06.733 ************************************ 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=6mt5ktd0z7865ngrupsir8lbm146dens 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=oz62zayz5sflcsmm694ak86ocxdcgt7n 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 6mt5ktd0z7865ngrupsir8lbm146dens 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s oz62zayz5sflcsmm694ak86ocxdcgt7n 00:18:06.733 07:28:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:18:06.993 [2024-11-20 07:28:10.709416] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
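From '* Second test run, disabling liburing, forcing AIO' onward, the earlier flag tests are repeated with liburing disabled; as the DD_APP+=("--aio") line shows, the only change is that every spdk_dd invocation now carries --aio. For the append case just started, that amounts to (paths abbreviated, sketch only):

    DD_APP=(./build/bin/spdk_dd --aio)     # force the kernel AIO backend instead of liburing
    "${DD_APP[@]}" --if=dd.dump0 --of=dd.dump1 --oflag=append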
00:18:06.993 [2024-11-20 07:28:10.709585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75399 ] 00:18:06.993 [2024-11-20 07:28:10.886603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.252 [2024-11-20 07:28:11.008524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.512  [2024-11-20T07:28:12.825Z] Copying: 32/32 [B] (average 31 kBps) 00:18:08.892 00:18:08.892 07:28:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ oz62zayz5sflcsmm694ak86ocxdcgt7n6mt5ktd0z7865ngrupsir8lbm146dens == \o\z\6\2\z\a\y\z\5\s\f\l\c\s\m\m\6\9\4\a\k\8\6\o\c\x\d\c\g\t\7\n\6\m\t\5\k\t\d\0\z\7\8\6\5\n\g\r\u\p\s\i\r\8\l\b\m\1\4\6\d\e\n\s ]] 00:18:08.892 00:18:08.892 real 0m1.947s 00:18:08.892 user 0m1.595s 00:18:08.892 sys 0m0.239s 00:18:08.892 07:28:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.892 07:28:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:08.892 ************************************ 00:18:08.892 END TEST dd_flag_append_forced_aio 00:18:08.892 ************************************ 00:18:08.892 07:28:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:18:08.892 07:28:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:08.892 07:28:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.892 07:28:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:18:08.892 ************************************ 00:18:08.892 START TEST dd_flag_directory_forced_aio 00:18:08.892 ************************************ 00:18:08.892 07:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:18:08.892 07:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:08.892 07:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:18:08.892 07:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:08.892 07:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:08.892 07:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.892 07:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:08.892 07:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.893 07:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:08.893 
07:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.893 07:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:08.893 07:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:08.893 07:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:08.893 [2024-11-20 07:28:12.718966] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:18:08.893 [2024-11-20 07:28:12.719117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75443 ] 00:18:09.153 [2024-11-20 07:28:12.899564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.153 [2024-11-20 07:28:13.040456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.722 [2024-11-20 07:28:13.414843] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:09.722 [2024-11-20 07:28:13.414900] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:09.722 [2024-11-20 07:28:13.414921] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:10.659 [2024-11-20 07:28:14.417859] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:10.918 07:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:18:10.919 07:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:10.919 07:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:18:10.919 07:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:18:10.919 07:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:18:10.919 07:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:10.919 07:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:10.919 07:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:18:10.919 07:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:10.919 07:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:10.919 07:28:14 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.919 07:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:10.919 07:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.919 07:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:10.919 07:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.919 07:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:10.919 07:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:10.919 07:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:10.919 [2024-11-20 07:28:14.779074] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:18:10.919 [2024-11-20 07:28:14.779194] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75470 ] 00:18:11.176 [2024-11-20 07:28:14.956162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.177 [2024-11-20 07:28:15.077127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.756 [2024-11-20 07:28:15.427655] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:11.756 [2024-11-20 07:28:15.427718] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:11.756 [2024-11-20 07:28:15.427756] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:12.697 [2024-11-20 07:28:16.415837] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:12.957 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:18:12.957 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.957 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:18:12.957 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:18:12.957 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:18:12.957 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.957 00:18:12.957 real 0m4.063s 00:18:12.957 user 0m3.400s 00:18:12.957 sys 0m0.461s 00:18:12.957 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.957 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # 
set +x 00:18:12.957 ************************************ 00:18:12.957 END TEST dd_flag_directory_forced_aio 00:18:12.957 ************************************ 00:18:12.957 07:28:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:18:12.957 07:28:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:12.957 07:28:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:12.957 07:28:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:18:12.957 ************************************ 00:18:12.957 START TEST dd_flag_nofollow_forced_aio 00:18:12.957 ************************************ 00:18:12.957 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:18:12.957 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:12.958 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:12.958 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:12.958 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:12.958 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:12.958 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:18:12.958 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:12.958 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:12.958 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.958 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:12.958 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.958 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:12.958 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.958 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:12.958 07:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:12.958 07:28:16 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:12.958 [2024-11-20 07:28:16.857327] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:18:12.958 [2024-11-20 07:28:16.857503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75516 ] 00:18:13.217 [2024-11-20 07:28:17.036296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.477 [2024-11-20 07:28:17.170443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.736 [2024-11-20 07:28:17.514970] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:18:13.736 [2024-11-20 07:28:17.515020] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:18:13.736 [2024-11-20 07:28:17.515041] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:14.675 [2024-11-20 07:28:18.475160] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:14.936 07:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:18:14.936 07:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.936 07:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:18:14.936 07:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:18:14.936 07:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:18:14.936 07:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.936 07:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:14.936 07:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:18:14.936 07:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:14.936 07:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.936 07:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.936 07:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.936 07:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.936 07:28:18 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.936 07:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.936 07:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.936 07:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:14.936 07:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:14.936 [2024-11-20 07:28:18.842119] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:18:14.936 [2024-11-20 07:28:18.842258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75543 ] 00:18:15.196 [2024-11-20 07:28:19.017060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.454 [2024-11-20 07:28:19.142313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.714 [2024-11-20 07:28:19.512181] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:18:15.714 [2024-11-20 07:28:19.512239] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:18:15.714 [2024-11-20 07:28:19.512262] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:16.652 [2024-11-20 07:28:20.512271] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:16.911 07:28:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:18:16.911 07:28:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:16.911 07:28:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:18:16.911 07:28:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:18:16.911 07:28:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:18:16.911 07:28:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:16.911 07:28:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:18:16.911 07:28:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:18:16.911 07:28:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:16.911 07:28:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:17.171 [2024-11-20 07:28:20.887004] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:18:17.171 [2024-11-20 07:28:20.887143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75563 ] 00:18:17.171 [2024-11-20 07:28:21.066194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.430 [2024-11-20 07:28:21.194700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.689  [2024-11-20T07:28:23.021Z] Copying: 512/512 [B] (average 500 kBps) 00:18:19.088 00:18:19.088 07:28:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 64o2h70mtzffq9nksxdvsgk2xczlrhkawdtlxqyqmpov5bevyc0p2ffse7x208fy0cgomic5xsn1p7ai5ngw7wbi9gi6f1n26u86fd08qdc8f96ctl84d2oelxq7euu7kyvekixejs70hwqpkwwlbdf8cmyv7cdqj7g9fttq2a48mgg5vs9p9xgicrxx6ao79ir5qykuae23g90rqh46hcnjks189zogrqeih044s6lxhn5v5ol9h1m292g1tld7l91r80d153vz7qxy6ccxwc9yukgt9kmughllqqo4fq4fyo9qhaaa7xa8vqjmy4bbj7dkw9ejbtswmvhggvfiya68y1m8j2nuchfv4h6rbs86oit6eng6x8opr4acf8678dze7qacg5mayq8si6osxlh5x2b6btz70nkq4kmp5a35wn22foh05avv85novt20pwgswqbwqkzltqi734lt59xgqmq1l95jgwunleayldafhemhx14nrqrlcaw2kdka == \6\4\o\2\h\7\0\m\t\z\f\f\q\9\n\k\s\x\d\v\s\g\k\2\x\c\z\l\r\h\k\a\w\d\t\l\x\q\y\q\m\p\o\v\5\b\e\v\y\c\0\p\2\f\f\s\e\7\x\2\0\8\f\y\0\c\g\o\m\i\c\5\x\s\n\1\p\7\a\i\5\n\g\w\7\w\b\i\9\g\i\6\f\1\n\2\6\u\8\6\f\d\0\8\q\d\c\8\f\9\6\c\t\l\8\4\d\2\o\e\l\x\q\7\e\u\u\7\k\y\v\e\k\i\x\e\j\s\7\0\h\w\q\p\k\w\w\l\b\d\f\8\c\m\y\v\7\c\d\q\j\7\g\9\f\t\t\q\2\a\4\8\m\g\g\5\v\s\9\p\9\x\g\i\c\r\x\x\6\a\o\7\9\i\r\5\q\y\k\u\a\e\2\3\g\9\0\r\q\h\4\6\h\c\n\j\k\s\1\8\9\z\o\g\r\q\e\i\h\0\4\4\s\6\l\x\h\n\5\v\5\o\l\9\h\1\m\2\9\2\g\1\t\l\d\7\l\9\1\r\8\0\d\1\5\3\v\z\7\q\x\y\6\c\c\x\w\c\9\y\u\k\g\t\9\k\m\u\g\h\l\l\q\q\o\4\f\q\4\f\y\o\9\q\h\a\a\a\7\x\a\8\v\q\j\m\y\4\b\b\j\7\d\k\w\9\e\j\b\t\s\w\m\v\h\g\g\v\f\i\y\a\6\8\y\1\m\8\j\2\n\u\c\h\f\v\4\h\6\r\b\s\8\6\o\i\t\6\e\n\g\6\x\8\o\p\r\4\a\c\f\8\6\7\8\d\z\e\7\q\a\c\g\5\m\a\y\q\8\s\i\6\o\s\x\l\h\5\x\2\b\6\b\t\z\7\0\n\k\q\4\k\m\p\5\a\3\5\w\n\2\2\f\o\h\0\5\a\v\v\8\5\n\o\v\t\2\0\p\w\g\s\w\q\b\w\q\k\z\l\t\q\i\7\3\4\l\t\5\9\x\g\q\m\q\1\l\9\5\j\g\w\u\n\l\e\a\y\l\d\a\f\h\e\m\h\x\1\4\n\r\q\r\l\c\a\w\2\k\d\k\a ]] 00:18:19.088 00:18:19.088 real 0m6.035s 00:18:19.088 user 0m5.012s 00:18:19.088 sys 0m0.709s 00:18:19.088 07:28:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:19.088 07:28:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:19.088 ************************************ 00:18:19.088 END TEST dd_flag_nofollow_forced_aio 00:18:19.088 ************************************ 00:18:19.088 07:28:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:18:19.088 07:28:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:19.088 07:28:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.088 07:28:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:18:19.088 ************************************ 00:18:19.088 START TEST dd_flag_noatime_forced_aio 00:18:19.088 ************************************ 00:18:19.088 07:28:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:18:19.088 07:28:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:18:19.088 07:28:22 
spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:18:19.088 07:28:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:18:19.088 07:28:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:18:19.088 07:28:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:19.088 07:28:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:19.088 07:28:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732087701 00:18:19.088 07:28:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:19.088 07:28:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732087702 00:18:19.088 07:28:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:18:20.026 07:28:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:20.286 [2024-11-20 07:28:23.960247] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:18:20.286 [2024-11-20 07:28:23.960392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75621 ] 00:18:20.286 [2024-11-20 07:28:24.134985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.545 [2024-11-20 07:28:24.261273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.804  [2024-11-20T07:28:26.115Z] Copying: 512/512 [B] (average 500 kBps) 00:18:22.182 00:18:22.182 07:28:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:22.182 07:28:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732087701 )) 00:18:22.182 07:28:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:22.182 07:28:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732087702 )) 00:18:22.182 07:28:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:22.182 [2024-11-20 07:28:25.935851] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:18:22.182 [2024-11-20 07:28:25.936003] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75650 ] 00:18:22.441 [2024-11-20 07:28:26.108456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.441 [2024-11-20 07:28:26.233129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.700  [2024-11-20T07:28:28.029Z] Copying: 512/512 [B] (average 500 kBps) 00:18:24.096 00:18:24.096 07:28:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:24.096 07:28:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732087706 )) 00:18:24.096 00:18:24.096 real 0m4.954s 00:18:24.096 user 0m3.255s 00:18:24.096 sys 0m0.476s 00:18:24.096 07:28:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.096 07:28:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:24.096 ************************************ 00:18:24.096 END TEST dd_flag_noatime_forced_aio 00:18:24.096 ************************************ 00:18:24.096 07:28:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:18:24.096 07:28:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:24.096 07:28:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.096 07:28:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:18:24.096 ************************************ 00:18:24.096 START TEST dd_flags_misc_forced_aio 00:18:24.096 ************************************ 00:18:24.096 07:28:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:18:24.096 07:28:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:18:24.096 07:28:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:18:24.096 07:28:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:18:24.096 07:28:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:18:24.096 07:28:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:18:24.096 07:28:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:18:24.096 07:28:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:24.096 07:28:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:24.096 07:28:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:18:24.096 [2024-11-20 07:28:27.980255] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:18:24.096 [2024-11-20 07:28:27.980409] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75694 ] 00:18:24.356 [2024-11-20 07:28:28.151997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.616 [2024-11-20 07:28:28.278932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.875  [2024-11-20T07:28:30.189Z] Copying: 512/512 [B] (average 500 kBps) 00:18:26.256 00:18:26.256 07:28:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hlsu27sispyn555q69mjwef4vgi21b0psobc1pnrovred9fl8p5lu5f6dyiedyfizoehkgywop62z4xa0bhgdz9mffatcmx4ds6tpwzeys7ysi0mi09t3pqx40d8v2aiwzbnbsh279ba4ugk8sr2kia1igjl726lbrv21aaxq3sfamk0sakqj9yo8aqook22p3msrulo9yd33ncx159vsvqau72cpa0953yocmqqxvt2vpup1nboe6cceen8mv4gmcc9kikbgpmynykmyygrwc1o6kk5x9bu4c57rzneoq3hplt9wc0vozurszweecvy1gcd8dn7d6aonis0xmm827rnsqvq3qess8v9t6ftw1kwohbpf06x9d7lqdfehad0198ucmphjfurlj9jnfopaf2j122w0qrwncom33k3ohvz5xkk3b4zqduiju3fsw3b098u492dm5q6nqq2qhy9gh8vdtrlpcr66sb5gbx7dfy8axil9gei0sszhev1dack == \h\l\s\u\2\7\s\i\s\p\y\n\5\5\5\q\6\9\m\j\w\e\f\4\v\g\i\2\1\b\0\p\s\o\b\c\1\p\n\r\o\v\r\e\d\9\f\l\8\p\5\l\u\5\f\6\d\y\i\e\d\y\f\i\z\o\e\h\k\g\y\w\o\p\6\2\z\4\x\a\0\b\h\g\d\z\9\m\f\f\a\t\c\m\x\4\d\s\6\t\p\w\z\e\y\s\7\y\s\i\0\m\i\0\9\t\3\p\q\x\4\0\d\8\v\2\a\i\w\z\b\n\b\s\h\2\7\9\b\a\4\u\g\k\8\s\r\2\k\i\a\1\i\g\j\l\7\2\6\l\b\r\v\2\1\a\a\x\q\3\s\f\a\m\k\0\s\a\k\q\j\9\y\o\8\a\q\o\o\k\2\2\p\3\m\s\r\u\l\o\9\y\d\3\3\n\c\x\1\5\9\v\s\v\q\a\u\7\2\c\p\a\0\9\5\3\y\o\c\m\q\q\x\v\t\2\v\p\u\p\1\n\b\o\e\6\c\c\e\e\n\8\m\v\4\g\m\c\c\9\k\i\k\b\g\p\m\y\n\y\k\m\y\y\g\r\w\c\1\o\6\k\k\5\x\9\b\u\4\c\5\7\r\z\n\e\o\q\3\h\p\l\t\9\w\c\0\v\o\z\u\r\s\z\w\e\e\c\v\y\1\g\c\d\8\d\n\7\d\6\a\o\n\i\s\0\x\m\m\8\2\7\r\n\s\q\v\q\3\q\e\s\s\8\v\9\t\6\f\t\w\1\k\w\o\h\b\p\f\0\6\x\9\d\7\l\q\d\f\e\h\a\d\0\1\9\8\u\c\m\p\h\j\f\u\r\l\j\9\j\n\f\o\p\a\f\2\j\1\2\2\w\0\q\r\w\n\c\o\m\3\3\k\3\o\h\v\z\5\x\k\k\3\b\4\z\q\d\u\i\j\u\3\f\s\w\3\b\0\9\8\u\4\9\2\d\m\5\q\6\n\q\q\2\q\h\y\9\g\h\8\v\d\t\r\l\p\c\r\6\6\s\b\5\g\b\x\7\d\f\y\8\a\x\i\l\9\g\e\i\0\s\s\z\h\e\v\1\d\a\c\k ]] 00:18:26.256 07:28:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:26.256 07:28:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:18:26.256 [2024-11-20 07:28:29.880585] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:18:26.256 [2024-11-20 07:28:29.880741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75714 ] 00:18:26.256 [2024-11-20 07:28:30.052974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.256 [2024-11-20 07:28:30.174141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.825  [2024-11-20T07:28:32.138Z] Copying: 512/512 [B] (average 500 kBps) 00:18:28.205 00:18:28.206 07:28:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hlsu27sispyn555q69mjwef4vgi21b0psobc1pnrovred9fl8p5lu5f6dyiedyfizoehkgywop62z4xa0bhgdz9mffatcmx4ds6tpwzeys7ysi0mi09t3pqx40d8v2aiwzbnbsh279ba4ugk8sr2kia1igjl726lbrv21aaxq3sfamk0sakqj9yo8aqook22p3msrulo9yd33ncx159vsvqau72cpa0953yocmqqxvt2vpup1nboe6cceen8mv4gmcc9kikbgpmynykmyygrwc1o6kk5x9bu4c57rzneoq3hplt9wc0vozurszweecvy1gcd8dn7d6aonis0xmm827rnsqvq3qess8v9t6ftw1kwohbpf06x9d7lqdfehad0198ucmphjfurlj9jnfopaf2j122w0qrwncom33k3ohvz5xkk3b4zqduiju3fsw3b098u492dm5q6nqq2qhy9gh8vdtrlpcr66sb5gbx7dfy8axil9gei0sszhev1dack == \h\l\s\u\2\7\s\i\s\p\y\n\5\5\5\q\6\9\m\j\w\e\f\4\v\g\i\2\1\b\0\p\s\o\b\c\1\p\n\r\o\v\r\e\d\9\f\l\8\p\5\l\u\5\f\6\d\y\i\e\d\y\f\i\z\o\e\h\k\g\y\w\o\p\6\2\z\4\x\a\0\b\h\g\d\z\9\m\f\f\a\t\c\m\x\4\d\s\6\t\p\w\z\e\y\s\7\y\s\i\0\m\i\0\9\t\3\p\q\x\4\0\d\8\v\2\a\i\w\z\b\n\b\s\h\2\7\9\b\a\4\u\g\k\8\s\r\2\k\i\a\1\i\g\j\l\7\2\6\l\b\r\v\2\1\a\a\x\q\3\s\f\a\m\k\0\s\a\k\q\j\9\y\o\8\a\q\o\o\k\2\2\p\3\m\s\r\u\l\o\9\y\d\3\3\n\c\x\1\5\9\v\s\v\q\a\u\7\2\c\p\a\0\9\5\3\y\o\c\m\q\q\x\v\t\2\v\p\u\p\1\n\b\o\e\6\c\c\e\e\n\8\m\v\4\g\m\c\c\9\k\i\k\b\g\p\m\y\n\y\k\m\y\y\g\r\w\c\1\o\6\k\k\5\x\9\b\u\4\c\5\7\r\z\n\e\o\q\3\h\p\l\t\9\w\c\0\v\o\z\u\r\s\z\w\e\e\c\v\y\1\g\c\d\8\d\n\7\d\6\a\o\n\i\s\0\x\m\m\8\2\7\r\n\s\q\v\q\3\q\e\s\s\8\v\9\t\6\f\t\w\1\k\w\o\h\b\p\f\0\6\x\9\d\7\l\q\d\f\e\h\a\d\0\1\9\8\u\c\m\p\h\j\f\u\r\l\j\9\j\n\f\o\p\a\f\2\j\1\2\2\w\0\q\r\w\n\c\o\m\3\3\k\3\o\h\v\z\5\x\k\k\3\b\4\z\q\d\u\i\j\u\3\f\s\w\3\b\0\9\8\u\4\9\2\d\m\5\q\6\n\q\q\2\q\h\y\9\g\h\8\v\d\t\r\l\p\c\r\6\6\s\b\5\g\b\x\7\d\f\y\8\a\x\i\l\9\g\e\i\0\s\s\z\h\e\v\1\d\a\c\k ]] 00:18:28.206 07:28:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:28.206 07:28:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:18:28.206 [2024-11-20 07:28:31.782643] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:18:28.206 [2024-11-20 07:28:31.782803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75739 ] 00:18:28.206 [2024-11-20 07:28:31.954834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.206 [2024-11-20 07:28:32.076908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.781  [2024-11-20T07:28:33.660Z] Copying: 512/512 [B] (average 71 kBps) 00:18:29.727 00:18:29.987 07:28:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hlsu27sispyn555q69mjwef4vgi21b0psobc1pnrovred9fl8p5lu5f6dyiedyfizoehkgywop62z4xa0bhgdz9mffatcmx4ds6tpwzeys7ysi0mi09t3pqx40d8v2aiwzbnbsh279ba4ugk8sr2kia1igjl726lbrv21aaxq3sfamk0sakqj9yo8aqook22p3msrulo9yd33ncx159vsvqau72cpa0953yocmqqxvt2vpup1nboe6cceen8mv4gmcc9kikbgpmynykmyygrwc1o6kk5x9bu4c57rzneoq3hplt9wc0vozurszweecvy1gcd8dn7d6aonis0xmm827rnsqvq3qess8v9t6ftw1kwohbpf06x9d7lqdfehad0198ucmphjfurlj9jnfopaf2j122w0qrwncom33k3ohvz5xkk3b4zqduiju3fsw3b098u492dm5q6nqq2qhy9gh8vdtrlpcr66sb5gbx7dfy8axil9gei0sszhev1dack == \h\l\s\u\2\7\s\i\s\p\y\n\5\5\5\q\6\9\m\j\w\e\f\4\v\g\i\2\1\b\0\p\s\o\b\c\1\p\n\r\o\v\r\e\d\9\f\l\8\p\5\l\u\5\f\6\d\y\i\e\d\y\f\i\z\o\e\h\k\g\y\w\o\p\6\2\z\4\x\a\0\b\h\g\d\z\9\m\f\f\a\t\c\m\x\4\d\s\6\t\p\w\z\e\y\s\7\y\s\i\0\m\i\0\9\t\3\p\q\x\4\0\d\8\v\2\a\i\w\z\b\n\b\s\h\2\7\9\b\a\4\u\g\k\8\s\r\2\k\i\a\1\i\g\j\l\7\2\6\l\b\r\v\2\1\a\a\x\q\3\s\f\a\m\k\0\s\a\k\q\j\9\y\o\8\a\q\o\o\k\2\2\p\3\m\s\r\u\l\o\9\y\d\3\3\n\c\x\1\5\9\v\s\v\q\a\u\7\2\c\p\a\0\9\5\3\y\o\c\m\q\q\x\v\t\2\v\p\u\p\1\n\b\o\e\6\c\c\e\e\n\8\m\v\4\g\m\c\c\9\k\i\k\b\g\p\m\y\n\y\k\m\y\y\g\r\w\c\1\o\6\k\k\5\x\9\b\u\4\c\5\7\r\z\n\e\o\q\3\h\p\l\t\9\w\c\0\v\o\z\u\r\s\z\w\e\e\c\v\y\1\g\c\d\8\d\n\7\d\6\a\o\n\i\s\0\x\m\m\8\2\7\r\n\s\q\v\q\3\q\e\s\s\8\v\9\t\6\f\t\w\1\k\w\o\h\b\p\f\0\6\x\9\d\7\l\q\d\f\e\h\a\d\0\1\9\8\u\c\m\p\h\j\f\u\r\l\j\9\j\n\f\o\p\a\f\2\j\1\2\2\w\0\q\r\w\n\c\o\m\3\3\k\3\o\h\v\z\5\x\k\k\3\b\4\z\q\d\u\i\j\u\3\f\s\w\3\b\0\9\8\u\4\9\2\d\m\5\q\6\n\q\q\2\q\h\y\9\g\h\8\v\d\t\r\l\p\c\r\6\6\s\b\5\g\b\x\7\d\f\y\8\a\x\i\l\9\g\e\i\0\s\s\z\h\e\v\1\d\a\c\k ]] 00:18:29.987 07:28:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:29.987 07:28:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:18:29.987 [2024-11-20 07:28:33.723809] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:18:29.987 [2024-11-20 07:28:33.723933] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75764 ] 00:18:29.987 [2024-11-20 07:28:33.896775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.246 [2024-11-20 07:28:34.015532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.505  [2024-11-20T07:28:35.820Z] Copying: 512/512 [B] (average 125 kBps) 00:18:31.887 00:18:31.887 07:28:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hlsu27sispyn555q69mjwef4vgi21b0psobc1pnrovred9fl8p5lu5f6dyiedyfizoehkgywop62z4xa0bhgdz9mffatcmx4ds6tpwzeys7ysi0mi09t3pqx40d8v2aiwzbnbsh279ba4ugk8sr2kia1igjl726lbrv21aaxq3sfamk0sakqj9yo8aqook22p3msrulo9yd33ncx159vsvqau72cpa0953yocmqqxvt2vpup1nboe6cceen8mv4gmcc9kikbgpmynykmyygrwc1o6kk5x9bu4c57rzneoq3hplt9wc0vozurszweecvy1gcd8dn7d6aonis0xmm827rnsqvq3qess8v9t6ftw1kwohbpf06x9d7lqdfehad0198ucmphjfurlj9jnfopaf2j122w0qrwncom33k3ohvz5xkk3b4zqduiju3fsw3b098u492dm5q6nqq2qhy9gh8vdtrlpcr66sb5gbx7dfy8axil9gei0sszhev1dack == \h\l\s\u\2\7\s\i\s\p\y\n\5\5\5\q\6\9\m\j\w\e\f\4\v\g\i\2\1\b\0\p\s\o\b\c\1\p\n\r\o\v\r\e\d\9\f\l\8\p\5\l\u\5\f\6\d\y\i\e\d\y\f\i\z\o\e\h\k\g\y\w\o\p\6\2\z\4\x\a\0\b\h\g\d\z\9\m\f\f\a\t\c\m\x\4\d\s\6\t\p\w\z\e\y\s\7\y\s\i\0\m\i\0\9\t\3\p\q\x\4\0\d\8\v\2\a\i\w\z\b\n\b\s\h\2\7\9\b\a\4\u\g\k\8\s\r\2\k\i\a\1\i\g\j\l\7\2\6\l\b\r\v\2\1\a\a\x\q\3\s\f\a\m\k\0\s\a\k\q\j\9\y\o\8\a\q\o\o\k\2\2\p\3\m\s\r\u\l\o\9\y\d\3\3\n\c\x\1\5\9\v\s\v\q\a\u\7\2\c\p\a\0\9\5\3\y\o\c\m\q\q\x\v\t\2\v\p\u\p\1\n\b\o\e\6\c\c\e\e\n\8\m\v\4\g\m\c\c\9\k\i\k\b\g\p\m\y\n\y\k\m\y\y\g\r\w\c\1\o\6\k\k\5\x\9\b\u\4\c\5\7\r\z\n\e\o\q\3\h\p\l\t\9\w\c\0\v\o\z\u\r\s\z\w\e\e\c\v\y\1\g\c\d\8\d\n\7\d\6\a\o\n\i\s\0\x\m\m\8\2\7\r\n\s\q\v\q\3\q\e\s\s\8\v\9\t\6\f\t\w\1\k\w\o\h\b\p\f\0\6\x\9\d\7\l\q\d\f\e\h\a\d\0\1\9\8\u\c\m\p\h\j\f\u\r\l\j\9\j\n\f\o\p\a\f\2\j\1\2\2\w\0\q\r\w\n\c\o\m\3\3\k\3\o\h\v\z\5\x\k\k\3\b\4\z\q\d\u\i\j\u\3\f\s\w\3\b\0\9\8\u\4\9\2\d\m\5\q\6\n\q\q\2\q\h\y\9\g\h\8\v\d\t\r\l\p\c\r\6\6\s\b\5\g\b\x\7\d\f\y\8\a\x\i\l\9\g\e\i\0\s\s\z\h\e\v\1\d\a\c\k ]] 00:18:31.887 07:28:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:18:31.887 07:28:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:18:31.887 07:28:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:18:31.887 07:28:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:31.887 07:28:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:31.887 07:28:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:18:31.887 [2024-11-20 07:28:35.573201] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:18:31.887 [2024-11-20 07:28:35.573337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75780 ] 00:18:31.887 [2024-11-20 07:28:35.746580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.147 [2024-11-20 07:28:35.862854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.406  [2024-11-20T07:28:37.719Z] Copying: 512/512 [B] (average 500 kBps) 00:18:33.786 00:18:33.786 07:28:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ rnbgcyntzynemlovnve1ssduzl2yr2xr4eq4k8jy8f3ed6q4isydwsysajhj6hvlikkoqyrh5wev8ep3bxdcsormjfaz1eep424pxgpiaagak66tilso06snd0au3j3t9lis7jbxc775rwbbs0jyrxlhii094l1h6cc7xw3o2zn165ykq3hh6ew5rdhma2rhs76ictkvs5app09x7t5khkl7h8r2pvo0380sc4mqscs4m0jpa3wef4ix355d7s1sudc3bgbsje135dflpjx0pst3o8kl92oiyvnz3tjcqy332t5tp988wffo8ioleqgp7usshv6rj3z81zzce5dfc0qy6wabip44dx9tmnkanax3ipwsap4mk646ji0usi8sky3tnqaofji4y1g4z1jqclxew48z9af0msug5avn3wvs0bukaxcxn4o69ntvaveckvgkb0k7ik7uiwukztl5yj7ee9peo8mghwnmi9erp52m6gr6v5te060ruhh6qkca == \r\n\b\g\c\y\n\t\z\y\n\e\m\l\o\v\n\v\e\1\s\s\d\u\z\l\2\y\r\2\x\r\4\e\q\4\k\8\j\y\8\f\3\e\d\6\q\4\i\s\y\d\w\s\y\s\a\j\h\j\6\h\v\l\i\k\k\o\q\y\r\h\5\w\e\v\8\e\p\3\b\x\d\c\s\o\r\m\j\f\a\z\1\e\e\p\4\2\4\p\x\g\p\i\a\a\g\a\k\6\6\t\i\l\s\o\0\6\s\n\d\0\a\u\3\j\3\t\9\l\i\s\7\j\b\x\c\7\7\5\r\w\b\b\s\0\j\y\r\x\l\h\i\i\0\9\4\l\1\h\6\c\c\7\x\w\3\o\2\z\n\1\6\5\y\k\q\3\h\h\6\e\w\5\r\d\h\m\a\2\r\h\s\7\6\i\c\t\k\v\s\5\a\p\p\0\9\x\7\t\5\k\h\k\l\7\h\8\r\2\p\v\o\0\3\8\0\s\c\4\m\q\s\c\s\4\m\0\j\p\a\3\w\e\f\4\i\x\3\5\5\d\7\s\1\s\u\d\c\3\b\g\b\s\j\e\1\3\5\d\f\l\p\j\x\0\p\s\t\3\o\8\k\l\9\2\o\i\y\v\n\z\3\t\j\c\q\y\3\3\2\t\5\t\p\9\8\8\w\f\f\o\8\i\o\l\e\q\g\p\7\u\s\s\h\v\6\r\j\3\z\8\1\z\z\c\e\5\d\f\c\0\q\y\6\w\a\b\i\p\4\4\d\x\9\t\m\n\k\a\n\a\x\3\i\p\w\s\a\p\4\m\k\6\4\6\j\i\0\u\s\i\8\s\k\y\3\t\n\q\a\o\f\j\i\4\y\1\g\4\z\1\j\q\c\l\x\e\w\4\8\z\9\a\f\0\m\s\u\g\5\a\v\n\3\w\v\s\0\b\u\k\a\x\c\x\n\4\o\6\9\n\t\v\a\v\e\c\k\v\g\k\b\0\k\7\i\k\7\u\i\w\u\k\z\t\l\5\y\j\7\e\e\9\p\e\o\8\m\g\h\w\n\m\i\9\e\r\p\5\2\m\6\g\r\6\v\5\t\e\0\6\0\r\u\h\h\6\q\k\c\a ]] 00:18:33.786 07:28:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:33.786 07:28:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:18:33.786 [2024-11-20 07:28:37.422619] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:18:33.786 [2024-11-20 07:28:37.422763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75803 ] 00:18:33.786 [2024-11-20 07:28:37.596985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.045 [2024-11-20 07:28:37.717309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.305  [2024-11-20T07:28:39.670Z] Copying: 512/512 [B] (average 500 kBps) 00:18:35.737 00:18:35.737 07:28:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ rnbgcyntzynemlovnve1ssduzl2yr2xr4eq4k8jy8f3ed6q4isydwsysajhj6hvlikkoqyrh5wev8ep3bxdcsormjfaz1eep424pxgpiaagak66tilso06snd0au3j3t9lis7jbxc775rwbbs0jyrxlhii094l1h6cc7xw3o2zn165ykq3hh6ew5rdhma2rhs76ictkvs5app09x7t5khkl7h8r2pvo0380sc4mqscs4m0jpa3wef4ix355d7s1sudc3bgbsje135dflpjx0pst3o8kl92oiyvnz3tjcqy332t5tp988wffo8ioleqgp7usshv6rj3z81zzce5dfc0qy6wabip44dx9tmnkanax3ipwsap4mk646ji0usi8sky3tnqaofji4y1g4z1jqclxew48z9af0msug5avn3wvs0bukaxcxn4o69ntvaveckvgkb0k7ik7uiwukztl5yj7ee9peo8mghwnmi9erp52m6gr6v5te060ruhh6qkca == \r\n\b\g\c\y\n\t\z\y\n\e\m\l\o\v\n\v\e\1\s\s\d\u\z\l\2\y\r\2\x\r\4\e\q\4\k\8\j\y\8\f\3\e\d\6\q\4\i\s\y\d\w\s\y\s\a\j\h\j\6\h\v\l\i\k\k\o\q\y\r\h\5\w\e\v\8\e\p\3\b\x\d\c\s\o\r\m\j\f\a\z\1\e\e\p\4\2\4\p\x\g\p\i\a\a\g\a\k\6\6\t\i\l\s\o\0\6\s\n\d\0\a\u\3\j\3\t\9\l\i\s\7\j\b\x\c\7\7\5\r\w\b\b\s\0\j\y\r\x\l\h\i\i\0\9\4\l\1\h\6\c\c\7\x\w\3\o\2\z\n\1\6\5\y\k\q\3\h\h\6\e\w\5\r\d\h\m\a\2\r\h\s\7\6\i\c\t\k\v\s\5\a\p\p\0\9\x\7\t\5\k\h\k\l\7\h\8\r\2\p\v\o\0\3\8\0\s\c\4\m\q\s\c\s\4\m\0\j\p\a\3\w\e\f\4\i\x\3\5\5\d\7\s\1\s\u\d\c\3\b\g\b\s\j\e\1\3\5\d\f\l\p\j\x\0\p\s\t\3\o\8\k\l\9\2\o\i\y\v\n\z\3\t\j\c\q\y\3\3\2\t\5\t\p\9\8\8\w\f\f\o\8\i\o\l\e\q\g\p\7\u\s\s\h\v\6\r\j\3\z\8\1\z\z\c\e\5\d\f\c\0\q\y\6\w\a\b\i\p\4\4\d\x\9\t\m\n\k\a\n\a\x\3\i\p\w\s\a\p\4\m\k\6\4\6\j\i\0\u\s\i\8\s\k\y\3\t\n\q\a\o\f\j\i\4\y\1\g\4\z\1\j\q\c\l\x\e\w\4\8\z\9\a\f\0\m\s\u\g\5\a\v\n\3\w\v\s\0\b\u\k\a\x\c\x\n\4\o\6\9\n\t\v\a\v\e\c\k\v\g\k\b\0\k\7\i\k\7\u\i\w\u\k\z\t\l\5\y\j\7\e\e\9\p\e\o\8\m\g\h\w\n\m\i\9\e\r\p\5\2\m\6\g\r\6\v\5\t\e\0\6\0\r\u\h\h\6\q\k\c\a ]] 00:18:35.737 07:28:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:35.737 07:28:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:18:35.737 [2024-11-20 07:28:39.308430] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:18:35.737 [2024-11-20 07:28:39.308561] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75828 ] 00:18:35.737 [2024-11-20 07:28:39.483023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.737 [2024-11-20 07:28:39.601978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.306  [2024-11-20T07:28:41.178Z] Copying: 512/512 [B] (average 125 kBps) 00:18:37.245 00:18:37.245 07:28:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ rnbgcyntzynemlovnve1ssduzl2yr2xr4eq4k8jy8f3ed6q4isydwsysajhj6hvlikkoqyrh5wev8ep3bxdcsormjfaz1eep424pxgpiaagak66tilso06snd0au3j3t9lis7jbxc775rwbbs0jyrxlhii094l1h6cc7xw3o2zn165ykq3hh6ew5rdhma2rhs76ictkvs5app09x7t5khkl7h8r2pvo0380sc4mqscs4m0jpa3wef4ix355d7s1sudc3bgbsje135dflpjx0pst3o8kl92oiyvnz3tjcqy332t5tp988wffo8ioleqgp7usshv6rj3z81zzce5dfc0qy6wabip44dx9tmnkanax3ipwsap4mk646ji0usi8sky3tnqaofji4y1g4z1jqclxew48z9af0msug5avn3wvs0bukaxcxn4o69ntvaveckvgkb0k7ik7uiwukztl5yj7ee9peo8mghwnmi9erp52m6gr6v5te060ruhh6qkca == \r\n\b\g\c\y\n\t\z\y\n\e\m\l\o\v\n\v\e\1\s\s\d\u\z\l\2\y\r\2\x\r\4\e\q\4\k\8\j\y\8\f\3\e\d\6\q\4\i\s\y\d\w\s\y\s\a\j\h\j\6\h\v\l\i\k\k\o\q\y\r\h\5\w\e\v\8\e\p\3\b\x\d\c\s\o\r\m\j\f\a\z\1\e\e\p\4\2\4\p\x\g\p\i\a\a\g\a\k\6\6\t\i\l\s\o\0\6\s\n\d\0\a\u\3\j\3\t\9\l\i\s\7\j\b\x\c\7\7\5\r\w\b\b\s\0\j\y\r\x\l\h\i\i\0\9\4\l\1\h\6\c\c\7\x\w\3\o\2\z\n\1\6\5\y\k\q\3\h\h\6\e\w\5\r\d\h\m\a\2\r\h\s\7\6\i\c\t\k\v\s\5\a\p\p\0\9\x\7\t\5\k\h\k\l\7\h\8\r\2\p\v\o\0\3\8\0\s\c\4\m\q\s\c\s\4\m\0\j\p\a\3\w\e\f\4\i\x\3\5\5\d\7\s\1\s\u\d\c\3\b\g\b\s\j\e\1\3\5\d\f\l\p\j\x\0\p\s\t\3\o\8\k\l\9\2\o\i\y\v\n\z\3\t\j\c\q\y\3\3\2\t\5\t\p\9\8\8\w\f\f\o\8\i\o\l\e\q\g\p\7\u\s\s\h\v\6\r\j\3\z\8\1\z\z\c\e\5\d\f\c\0\q\y\6\w\a\b\i\p\4\4\d\x\9\t\m\n\k\a\n\a\x\3\i\p\w\s\a\p\4\m\k\6\4\6\j\i\0\u\s\i\8\s\k\y\3\t\n\q\a\o\f\j\i\4\y\1\g\4\z\1\j\q\c\l\x\e\w\4\8\z\9\a\f\0\m\s\u\g\5\a\v\n\3\w\v\s\0\b\u\k\a\x\c\x\n\4\o\6\9\n\t\v\a\v\e\c\k\v\g\k\b\0\k\7\i\k\7\u\i\w\u\k\z\t\l\5\y\j\7\e\e\9\p\e\o\8\m\g\h\w\n\m\i\9\e\r\p\5\2\m\6\g\r\6\v\5\t\e\0\6\0\r\u\h\h\6\q\k\c\a ]] 00:18:37.245 07:28:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:37.245 07:28:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:18:37.504 [2024-11-20 07:28:41.195466] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:18:37.504 [2024-11-20 07:28:41.195618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75852 ] 00:18:37.504 [2024-11-20 07:28:41.372767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.764 [2024-11-20 07:28:41.496607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.023  [2024-11-20T07:28:43.337Z] Copying: 512/512 [B] (average 125 kBps) 00:18:39.404 00:18:39.404 07:28:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ rnbgcyntzynemlovnve1ssduzl2yr2xr4eq4k8jy8f3ed6q4isydwsysajhj6hvlikkoqyrh5wev8ep3bxdcsormjfaz1eep424pxgpiaagak66tilso06snd0au3j3t9lis7jbxc775rwbbs0jyrxlhii094l1h6cc7xw3o2zn165ykq3hh6ew5rdhma2rhs76ictkvs5app09x7t5khkl7h8r2pvo0380sc4mqscs4m0jpa3wef4ix355d7s1sudc3bgbsje135dflpjx0pst3o8kl92oiyvnz3tjcqy332t5tp988wffo8ioleqgp7usshv6rj3z81zzce5dfc0qy6wabip44dx9tmnkanax3ipwsap4mk646ji0usi8sky3tnqaofji4y1g4z1jqclxew48z9af0msug5avn3wvs0bukaxcxn4o69ntvaveckvgkb0k7ik7uiwukztl5yj7ee9peo8mghwnmi9erp52m6gr6v5te060ruhh6qkca == \r\n\b\g\c\y\n\t\z\y\n\e\m\l\o\v\n\v\e\1\s\s\d\u\z\l\2\y\r\2\x\r\4\e\q\4\k\8\j\y\8\f\3\e\d\6\q\4\i\s\y\d\w\s\y\s\a\j\h\j\6\h\v\l\i\k\k\o\q\y\r\h\5\w\e\v\8\e\p\3\b\x\d\c\s\o\r\m\j\f\a\z\1\e\e\p\4\2\4\p\x\g\p\i\a\a\g\a\k\6\6\t\i\l\s\o\0\6\s\n\d\0\a\u\3\j\3\t\9\l\i\s\7\j\b\x\c\7\7\5\r\w\b\b\s\0\j\y\r\x\l\h\i\i\0\9\4\l\1\h\6\c\c\7\x\w\3\o\2\z\n\1\6\5\y\k\q\3\h\h\6\e\w\5\r\d\h\m\a\2\r\h\s\7\6\i\c\t\k\v\s\5\a\p\p\0\9\x\7\t\5\k\h\k\l\7\h\8\r\2\p\v\o\0\3\8\0\s\c\4\m\q\s\c\s\4\m\0\j\p\a\3\w\e\f\4\i\x\3\5\5\d\7\s\1\s\u\d\c\3\b\g\b\s\j\e\1\3\5\d\f\l\p\j\x\0\p\s\t\3\o\8\k\l\9\2\o\i\y\v\n\z\3\t\j\c\q\y\3\3\2\t\5\t\p\9\8\8\w\f\f\o\8\i\o\l\e\q\g\p\7\u\s\s\h\v\6\r\j\3\z\8\1\z\z\c\e\5\d\f\c\0\q\y\6\w\a\b\i\p\4\4\d\x\9\t\m\n\k\a\n\a\x\3\i\p\w\s\a\p\4\m\k\6\4\6\j\i\0\u\s\i\8\s\k\y\3\t\n\q\a\o\f\j\i\4\y\1\g\4\z\1\j\q\c\l\x\e\w\4\8\z\9\a\f\0\m\s\u\g\5\a\v\n\3\w\v\s\0\b\u\k\a\x\c\x\n\4\o\6\9\n\t\v\a\v\e\c\k\v\g\k\b\0\k\7\i\k\7\u\i\w\u\k\z\t\l\5\y\j\7\e\e\9\p\e\o\8\m\g\h\w\n\m\i\9\e\r\p\5\2\m\6\g\r\6\v\5\t\e\0\6\0\r\u\h\h\6\q\k\c\a ]] 00:18:39.404 00:18:39.404 real 0m15.107s 00:18:39.404 user 0m12.433s 00:18:39.404 sys 0m1.733s 00:18:39.404 07:28:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:39.404 ************************************ 00:18:39.404 END TEST dd_flags_misc_forced_aio 00:18:39.404 ************************************ 00:18:39.404 07:28:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:39.404 07:28:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:18:39.404 07:28:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:39.404 07:28:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:39.404 00:18:39.404 real 1m4.417s 00:18:39.404 user 0m51.071s 00:18:39.404 sys 0m7.828s 00:18:39.404 07:28:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:39.404 07:28:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:18:39.404 ************************************ 00:18:39.404 END TEST spdk_dd_posix 00:18:39.404 
************************************ 00:18:39.404 07:28:43 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:18:39.404 07:28:43 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:39.404 07:28:43 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:39.404 07:28:43 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:18:39.404 ************************************ 00:18:39.404 START TEST spdk_dd_malloc 00:18:39.404 ************************************ 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:18:39.404 * Looking for test storage... 00:18:39.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:18:39.404 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:39.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.665 --rc genhtml_branch_coverage=1 00:18:39.665 --rc genhtml_function_coverage=1 00:18:39.665 --rc genhtml_legend=1 00:18:39.665 --rc geninfo_all_blocks=1 00:18:39.665 --rc geninfo_unexecuted_blocks=1 00:18:39.665 00:18:39.665 ' 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:39.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.665 --rc genhtml_branch_coverage=1 00:18:39.665 --rc genhtml_function_coverage=1 00:18:39.665 --rc genhtml_legend=1 00:18:39.665 --rc geninfo_all_blocks=1 00:18:39.665 --rc geninfo_unexecuted_blocks=1 00:18:39.665 00:18:39.665 ' 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:39.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.665 --rc genhtml_branch_coverage=1 00:18:39.665 --rc genhtml_function_coverage=1 00:18:39.665 --rc genhtml_legend=1 00:18:39.665 --rc geninfo_all_blocks=1 00:18:39.665 --rc geninfo_unexecuted_blocks=1 00:18:39.665 00:18:39.665 ' 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:39.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.665 --rc genhtml_branch_coverage=1 00:18:39.665 --rc genhtml_function_coverage=1 00:18:39.665 --rc genhtml_legend=1 00:18:39.665 --rc geninfo_all_blocks=1 00:18:39.665 --rc geninfo_unexecuted_blocks=1 00:18:39.665 00:18:39.665 ' 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.665 07:28:43 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # export PATH 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:18:39.665 ************************************ 00:18:39.665 START TEST dd_malloc_copy 00:18:39.665 ************************************ 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:18:39.665 07:28:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:18:39.666 07:28:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:18:39.666 07:28:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:18:39.666 07:28:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:18:39.666 07:28:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:18:39.666 { 00:18:39.666 "subsystems": [ 00:18:39.666 { 00:18:39.666 "subsystem": "bdev", 00:18:39.666 "config": [ 00:18:39.666 { 00:18:39.666 "params": { 00:18:39.666 "block_size": 512, 00:18:39.666 "num_blocks": 1048576, 00:18:39.666 "name": "malloc0" 00:18:39.666 }, 00:18:39.666 "method": "bdev_malloc_create" 00:18:39.666 }, 00:18:39.666 { 00:18:39.666 "params": { 00:18:39.666 "block_size": 512, 00:18:39.666 "num_blocks": 1048576, 00:18:39.666 "name": "malloc1" 00:18:39.666 }, 00:18:39.666 "method": "bdev_malloc_create" 00:18:39.666 }, 00:18:39.666 { 00:18:39.666 "method": "bdev_wait_for_examine" 00:18:39.666 } 00:18:39.666 ] 00:18:39.666 } 00:18:39.666 ] 00:18:39.666 } 00:18:39.666 [2024-11-20 07:28:43.414164] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
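The dd_malloc_copy pass starting here drives spdk_dd between two RAM-backed malloc bdevs defined by the JSON config printed just above (512-byte blocks, 1048576 blocks each). A minimal standalone sketch of that first copy, assuming the repo-local spdk_dd binary from the trace and a hypothetical /tmp/malloc_copy.json in place of the generated /dev/fd/62 config:

# sketch: copy malloc0 -> malloc1 with spdk_dd (config file path is illustrative)
cat > /tmp/malloc_copy.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_malloc_create", "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
  { "method": "bdev_malloc_create", "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /tmp/malloc_copy.json

The second pass later in the trace simply swaps --ib and --ob to copy malloc1 back into malloc0.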
00:18:39.666 [2024-11-20 07:28:43.414307] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75943 ] 00:18:39.926 [2024-11-20 07:28:43.587145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.926 [2024-11-20 07:28:43.708476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.465  [2024-11-20T07:28:47.336Z] Copying: 216/512 [MB] (216 MBps) [2024-11-20T07:28:47.593Z] Copying: 427/512 [MB] (211 MBps) [2024-11-20T07:28:52.868Z] Copying: 512/512 [MB] (average 216 MBps) 00:18:48.935 00:18:48.935 07:28:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:18:48.935 07:28:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:18:48.935 07:28:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:18:48.935 07:28:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:18:48.935 { 00:18:48.935 "subsystems": [ 00:18:48.935 { 00:18:48.935 "subsystem": "bdev", 00:18:48.935 "config": [ 00:18:48.935 { 00:18:48.935 "params": { 00:18:48.935 "block_size": 512, 00:18:48.935 "num_blocks": 1048576, 00:18:48.935 "name": "malloc0" 00:18:48.935 }, 00:18:48.935 "method": "bdev_malloc_create" 00:18:48.935 }, 00:18:48.935 { 00:18:48.935 "params": { 00:18:48.935 "block_size": 512, 00:18:48.935 "num_blocks": 1048576, 00:18:48.935 "name": "malloc1" 00:18:48.935 }, 00:18:48.935 "method": "bdev_malloc_create" 00:18:48.935 }, 00:18:48.935 { 00:18:48.935 "method": "bdev_wait_for_examine" 00:18:48.935 } 00:18:48.935 ] 00:18:48.935 } 00:18:48.935 ] 00:18:48.935 } 00:18:48.935 [2024-11-20 07:28:51.983081] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:18:48.935 [2024-11-20 07:28:51.983305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76040 ] 00:18:48.935 [2024-11-20 07:28:52.163490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.935 [2024-11-20 07:28:52.297454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.842  [2024-11-20T07:28:56.159Z] Copying: 217/512 [MB] (217 MBps) [2024-11-20T07:28:56.159Z] Copying: 441/512 [MB] (223 MBps) [2024-11-20T07:29:00.356Z] Copying: 512/512 [MB] (average 220 MBps) 00:18:56.423 00:18:56.423 00:18:56.423 real 0m16.920s 00:18:56.423 user 0m15.514s 00:18:56.423 sys 0m1.231s 00:18:56.424 07:29:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.424 07:29:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:18:56.424 ************************************ 00:18:56.424 END TEST dd_malloc_copy 00:18:56.424 ************************************ 00:18:56.424 00:18:56.424 real 0m17.200s 00:18:56.424 user 0m15.655s 00:18:56.424 sys 0m1.395s 00:18:56.424 07:29:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.424 07:29:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:18:56.424 ************************************ 00:18:56.424 END TEST spdk_dd_malloc 00:18:56.424 ************************************ 00:18:56.684 07:29:00 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:18:56.684 07:29:00 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:56.684 07:29:00 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.684 07:29:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:18:56.684 ************************************ 00:18:56.684 START TEST spdk_dd_bdev_to_bdev 00:18:56.684 ************************************ 00:18:56.684 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:18:56.684 * Looking for test storage... 
00:18:56.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:56.684 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:56.684 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:18:56.684 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:56.684 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:56.684 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:56.684 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.684 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.684 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.684 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.684 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:56.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.685 --rc genhtml_branch_coverage=1 00:18:56.685 --rc genhtml_function_coverage=1 00:18:56.685 --rc genhtml_legend=1 00:18:56.685 --rc geninfo_all_blocks=1 00:18:56.685 --rc geninfo_unexecuted_blocks=1 00:18:56.685 00:18:56.685 ' 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:56.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.685 --rc genhtml_branch_coverage=1 00:18:56.685 --rc genhtml_function_coverage=1 00:18:56.685 --rc genhtml_legend=1 00:18:56.685 --rc geninfo_all_blocks=1 00:18:56.685 --rc geninfo_unexecuted_blocks=1 00:18:56.685 00:18:56.685 ' 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:56.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.685 --rc genhtml_branch_coverage=1 00:18:56.685 --rc genhtml_function_coverage=1 00:18:56.685 --rc genhtml_legend=1 00:18:56.685 --rc geninfo_all_blocks=1 00:18:56.685 --rc geninfo_unexecuted_blocks=1 00:18:56.685 00:18:56.685 ' 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:56.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.685 --rc genhtml_branch_coverage=1 00:18:56.685 --rc genhtml_function_coverage=1 00:18:56.685 --rc genhtml_legend=1 00:18:56.685 --rc geninfo_all_blocks=1 00:18:56.685 --rc geninfo_unexecuted_blocks=1 00:18:56.685 00:18:56.685 ' 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.685 07:29:00 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # export PATH 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:18:56.685 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:18:56.945 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:18:56.945 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:18:56.945 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:18:56.945 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:18:56.945 07:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:18:56.945 [2024-11-20 07:29:00.666373] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:18:56.945 [2024-11-20 07:29:00.666515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76211 ] 00:18:56.945 [2024-11-20 07:29:00.843038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.205 [2024-11-20 07:29:00.960623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.785  [2024-11-20T07:29:02.674Z] Copying: 256/256 [MB] (average 1662 MBps) 00:18:58.741 00:18:58.741 07:29:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:58.741 07:29:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:58.741 07:29:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:18:58.741 07:29:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:18:58.741 07:29:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:18:58.741 07:29:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:58.741 07:29:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.741 07:29:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:18:58.741 ************************************ 00:18:58.741 START TEST dd_inflate_file 00:18:58.741 ************************************ 00:18:58.741 07:29:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:18:58.741 [2024-11-20 07:29:02.656279] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:18:58.741 [2024-11-20 07:29:02.656403] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76237 ] 00:18:59.001 [2024-11-20 07:29:02.812164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.261 [2024-11-20 07:29:02.931946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.521  [2024-11-20T07:29:04.836Z] Copying: 64/64 [MB] (average 1523 MBps) 00:19:00.903 00:19:00.903 ************************************ 00:19:00.903 END TEST dd_inflate_file 00:19:00.903 ************************************ 00:19:00.903 00:19:00.903 real 0m1.878s 00:19:00.903 user 0m1.517s 00:19:00.903 sys 0m0.249s 00:19:00.903 07:29:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.903 07:29:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:19:00.903 07:29:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:19:00.903 07:29:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:19:00.903 07:29:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:19:00.903 07:29:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:19:00.903 07:29:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:19:00.903 07:29:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:19:00.903 07:29:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:19:00.903 07:29:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.903 07:29:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:19:00.903 ************************************ 00:19:00.903 START TEST dd_copy_to_out_bdev 00:19:00.903 ************************************ 00:19:00.903 07:29:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:19:00.903 { 00:19:00.903 "subsystems": [ 00:19:00.903 { 00:19:00.903 "subsystem": "bdev", 00:19:00.903 "config": [ 00:19:00.903 { 00:19:00.903 "params": { 00:19:00.903 "block_size": 4096, 00:19:00.903 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:19:00.903 "name": "aio1" 00:19:00.903 }, 00:19:00.903 "method": "bdev_aio_create" 00:19:00.903 }, 00:19:00.903 { 00:19:00.903 "params": { 00:19:00.903 "trtype": "pcie", 00:19:00.903 "traddr": "0000:00:10.0", 00:19:00.903 "name": "Nvme0" 00:19:00.903 }, 00:19:00.903 "method": "bdev_nvme_attach_controller" 00:19:00.903 }, 00:19:00.903 { 00:19:00.903 "method": "bdev_wait_for_examine" 00:19:00.903 } 00:19:00.903 ] 00:19:00.903 } 00:19:00.903 ] 00:19:00.903 } 00:19:00.903 [2024-11-20 07:29:04.618352] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
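The dd_copy_to_out_bdev run starting here copies the prepared dd.dump0 file into the NVMe bdev, using the aio1/Nvme0 bdev config printed above. A hedged sketch of the traced invocation, with a hypothetical /tmp/bdev_to_bdev.json standing in for the generated /dev/fd/62 config:

# sketch: host file -> Nvme0n1 bdev (config file path is illustrative)
cat > /tmp/bdev_to_bdev.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_aio_create", "params": { "name": "aio1", "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", "block_size": 4096 } },
  { "method": "bdev_nvme_attach_controller", "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:10.0" } },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /tmp/bdev_to_bdev.json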
00:19:00.903 [2024-11-20 07:29:04.618626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76282 ] 00:19:00.903 [2024-11-20 07:29:04.796718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.163 [2024-11-20 07:29:04.922578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.545  [2024-11-20T07:29:07.859Z] Copying: 64/64 [MB] (average 70 MBps) 00:19:03.926 00:19:03.926 00:19:03.926 real 0m2.914s 00:19:03.926 user 0m2.507s 00:19:03.926 sys 0m0.300s 00:19:03.926 07:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.926 07:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:19:03.926 ************************************ 00:19:03.926 END TEST dd_copy_to_out_bdev 00:19:03.926 ************************************ 00:19:03.926 07:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:19:03.926 07:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:19:03.926 07:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:03.926 07:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.926 07:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:19:03.926 ************************************ 00:19:03.926 START TEST dd_offset_magic 00:19:03.926 ************************************ 00:19:03.926 07:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:19:03.926 07:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:19:03.926 07:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:19:03.926 07:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:19:03.926 07:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:19:03.926 07:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:19:03.926 07:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:19:03.926 07:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:19:03.926 07:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:19:03.926 { 00:19:03.926 "subsystems": [ 00:19:03.926 { 00:19:03.926 "subsystem": "bdev", 00:19:03.926 "config": [ 00:19:03.926 { 00:19:03.926 "params": { 00:19:03.926 "block_size": 4096, 00:19:03.926 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:19:03.926 "name": "aio1" 00:19:03.926 }, 00:19:03.926 "method": "bdev_aio_create" 00:19:03.926 }, 00:19:03.926 { 00:19:03.926 "params": { 00:19:03.926 "trtype": "pcie", 00:19:03.926 "traddr": "0000:00:10.0", 00:19:03.926 "name": "Nvme0" 00:19:03.926 }, 00:19:03.926 "method": "bdev_nvme_attach_controller" 00:19:03.926 }, 00:19:03.926 { 00:19:03.926 "method": "bdev_wait_for_examine" 00:19:03.926 } 00:19:03.926 
] 00:19:03.926 } 00:19:03.926 ] 00:19:03.926 } 00:19:03.926 [2024-11-20 07:29:07.592000] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:19:03.926 [2024-11-20 07:29:07.592121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76332 ] 00:19:03.926 [2024-11-20 07:29:07.768659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.187 [2024-11-20 07:29:07.892127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.125  [2024-11-20T07:29:10.442Z] Copying: 65/65 [MB] (average 119 MBps) 00:19:06.509 00:19:06.509 07:29:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:19:06.509 07:29:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:19:06.509 07:29:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:19:06.509 07:29:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:19:06.509 { 00:19:06.509 "subsystems": [ 00:19:06.509 { 00:19:06.509 "subsystem": "bdev", 00:19:06.509 "config": [ 00:19:06.509 { 00:19:06.509 "params": { 00:19:06.509 "block_size": 4096, 00:19:06.509 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:19:06.509 "name": "aio1" 00:19:06.509 }, 00:19:06.509 "method": "bdev_aio_create" 00:19:06.509 }, 00:19:06.509 { 00:19:06.509 "params": { 00:19:06.509 "trtype": "pcie", 00:19:06.509 "traddr": "0000:00:10.0", 00:19:06.509 "name": "Nvme0" 00:19:06.509 }, 00:19:06.509 "method": "bdev_nvme_attach_controller" 00:19:06.509 }, 00:19:06.509 { 00:19:06.509 "method": "bdev_wait_for_examine" 00:19:06.509 } 00:19:06.509 ] 00:19:06.509 } 00:19:06.509 ] 00:19:06.509 } 00:19:06.509 [2024-11-20 07:29:10.162002] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:19:06.509 [2024-11-20 07:29:10.162141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76370 ] 00:19:06.509 [2024-11-20 07:29:10.335159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.769 [2024-11-20 07:29:10.458488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.029  [2024-11-20T07:29:12.344Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:19:08.411 00:19:08.411 07:29:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:19:08.411 07:29:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:19:08.411 07:29:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:19:08.411 07:29:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:19:08.411 07:29:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:19:08.411 07:29:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:19:08.411 07:29:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:19:08.411 { 00:19:08.411 "subsystems": [ 00:19:08.411 { 00:19:08.411 "subsystem": "bdev", 00:19:08.411 "config": [ 00:19:08.411 { 00:19:08.411 "params": { 00:19:08.411 "block_size": 4096, 00:19:08.411 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:19:08.411 "name": "aio1" 00:19:08.411 }, 00:19:08.411 "method": "bdev_aio_create" 00:19:08.411 }, 00:19:08.411 { 00:19:08.411 "params": { 00:19:08.411 "trtype": "pcie", 00:19:08.411 "traddr": "0000:00:10.0", 00:19:08.411 "name": "Nvme0" 00:19:08.411 }, 00:19:08.411 "method": "bdev_nvme_attach_controller" 00:19:08.411 }, 00:19:08.411 { 00:19:08.411 "method": "bdev_wait_for_examine" 00:19:08.411 } 00:19:08.411 ] 00:19:08.411 } 00:19:08.411 ] 00:19:08.411 } 00:19:08.411 [2024-11-20 07:29:12.130254] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
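Each dd_offset_magic round in this part of the trace follows the same pattern: copy 65 one-MiB blocks from Nvme0n1 into the aio1 file at a block offset (16, then 64), read 1 MiB back from that offset into dd.dump1, and check that it still begins with the 26-byte magic marker written earlier. A sketch of one round, reusing the hypothetical config file from the previous sketch:

offset=16   # the second round uses offset=64
bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# write 65 x 1 MiB blocks from the NVMe bdev into aio1 at the given block offset
$bin --ib=Nvme0n1 --ob=aio1 --count=65 --seek=$offset --bs=1048576 --json /tmp/bdev_to_bdev.json
# read the first 1 MiB back from the same offset and verify the marker
$bin --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
  --count=1 --skip=$offset --bs=1048576 --json /tmp/bdev_to_bdev.json
# the input redirection below is an assumption; the trace only shows the read itself
read -rn26 magic_check < /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
[[ $magic_check == 'This Is Our Magic, find it' ]]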
00:19:08.411 [2024-11-20 07:29:12.130411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76402 ] 00:19:08.411 [2024-11-20 07:29:12.309217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.671 [2024-11-20 07:29:12.453567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.610  [2024-11-20T07:29:14.926Z] Copying: 65/65 [MB] (average 157 MBps) 00:19:10.993 00:19:10.993 07:29:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:19:10.993 07:29:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:19:10.993 07:29:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:19:10.993 07:29:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:19:10.993 { 00:19:10.993 "subsystems": [ 00:19:10.993 { 00:19:10.993 "subsystem": "bdev", 00:19:10.993 "config": [ 00:19:10.993 { 00:19:10.993 "params": { 00:19:10.993 "block_size": 4096, 00:19:10.993 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:19:10.993 "name": "aio1" 00:19:10.993 }, 00:19:10.993 "method": "bdev_aio_create" 00:19:10.993 }, 00:19:10.993 { 00:19:10.993 "params": { 00:19:10.993 "trtype": "pcie", 00:19:10.993 "traddr": "0000:00:10.0", 00:19:10.993 "name": "Nvme0" 00:19:10.993 }, 00:19:10.993 "method": "bdev_nvme_attach_controller" 00:19:10.993 }, 00:19:10.993 { 00:19:10.993 "method": "bdev_wait_for_examine" 00:19:10.993 } 00:19:10.993 ] 00:19:10.993 } 00:19:10.994 ] 00:19:10.994 } 00:19:10.994 [2024-11-20 07:29:14.707707] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:19:10.994 [2024-11-20 07:29:14.707885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76434 ] 00:19:10.994 [2024-11-20 07:29:14.879853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.254 [2024-11-20 07:29:15.004808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.514  [2024-11-20T07:29:16.828Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:19:12.895 00:19:12.895 07:29:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:19:12.895 07:29:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:19:12.895 ************************************ 00:19:12.895 END TEST dd_offset_magic 00:19:12.895 ************************************ 00:19:12.895 00:19:12.895 real 0m9.008s 00:19:12.895 user 0m6.687s 00:19:12.895 sys 0m1.254s 00:19:12.895 07:29:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.895 07:29:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:19:12.895 07:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:19:12.895 07:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:19:12.895 07:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:19:12.895 07:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:19:12.895 07:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:19:12.895 07:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:19:12.895 07:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:19:12.895 07:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:19:12.895 07:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:19:12.895 07:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:19:12.895 07:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:19:12.895 { 00:19:12.895 "subsystems": [ 00:19:12.895 { 00:19:12.895 "subsystem": "bdev", 00:19:12.895 "config": [ 00:19:12.895 { 00:19:12.895 "params": { 00:19:12.895 "block_size": 4096, 00:19:12.895 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:19:12.895 "name": "aio1" 00:19:12.895 }, 00:19:12.895 "method": "bdev_aio_create" 00:19:12.895 }, 00:19:12.895 { 00:19:12.895 "params": { 00:19:12.895 "trtype": "pcie", 00:19:12.895 "traddr": "0000:00:10.0", 00:19:12.895 "name": "Nvme0" 00:19:12.895 }, 00:19:12.895 "method": "bdev_nvme_attach_controller" 00:19:12.895 }, 00:19:12.895 { 00:19:12.895 "method": "bdev_wait_for_examine" 00:19:12.895 } 00:19:12.895 ] 00:19:12.895 } 00:19:12.895 ] 00:19:12.895 } 00:19:12.895 [2024-11-20 07:29:16.631937] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:19:12.895 [2024-11-20 07:29:16.632080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76481 ] 00:19:12.895 [2024-11-20 07:29:16.788428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.155 [2024-11-20 07:29:16.913754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.415  [2024-11-20T07:29:18.736Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:19:14.803 00:19:14.803 07:29:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:19:14.803 07:29:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=aio1 00:19:14.803 07:29:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:19:14.803 07:29:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:19:14.803 07:29:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:19:14.803 07:29:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:19:14.803 07:29:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:19:14.803 07:29:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:19:14.803 07:29:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:19:14.803 07:29:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:19:14.803 { 00:19:14.803 "subsystems": [ 00:19:14.803 { 00:19:14.803 "subsystem": "bdev", 00:19:14.803 "config": [ 00:19:14.803 { 00:19:14.803 "params": { 00:19:14.803 "block_size": 4096, 00:19:14.803 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:19:14.803 "name": "aio1" 00:19:14.803 }, 00:19:14.803 "method": "bdev_aio_create" 00:19:14.803 }, 00:19:14.803 { 00:19:14.803 "params": { 00:19:14.803 "trtype": "pcie", 00:19:14.803 "traddr": "0000:00:10.0", 00:19:14.803 "name": "Nvme0" 00:19:14.803 }, 00:19:14.803 "method": "bdev_nvme_attach_controller" 00:19:14.803 }, 00:19:14.803 { 00:19:14.803 "method": "bdev_wait_for_examine" 00:19:14.803 } 00:19:14.803 ] 00:19:14.803 } 00:19:14.803 ] 00:19:14.803 } 00:19:14.803 [2024-11-20 07:29:18.587683] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
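The cleanup above zeroes only the head of each target rather than the whole device: clear_nvme is called with a size of 4194330 bytes, which at a 1 MiB block size rounds up to count=5, matching the two traced spdk_dd zero-fill commands. A sketch of those passes, reusing the same hypothetical config file:

# sketch: zero the first 5 MiB of the NVMe bdev and of the aio1 file
bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
$bin --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /tmp/bdev_to_bdev.json
$bin --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /tmp/bdev_to_bdev.json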
00:19:14.803 [2024-11-20 07:29:18.587829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76512 ] 00:19:15.063 [2024-11-20 07:29:18.760495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.063 [2024-11-20 07:29:18.880520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.631  [2024-11-20T07:29:20.502Z] Copying: 5120/5120 [kB] (average 217 MBps) 00:19:16.569 00:19:16.570 07:29:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:19:16.570 ************************************ 00:19:16.570 END TEST spdk_dd_bdev_to_bdev 00:19:16.570 ************************************ 00:19:16.570 00:19:16.570 real 0m20.080s 00:19:16.570 user 0m15.514s 00:19:16.570 sys 0m2.964s 00:19:16.570 07:29:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:16.570 07:29:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:19:16.830 07:29:20 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:19:16.830 07:29:20 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:19:16.830 07:29:20 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:16.830 07:29:20 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.830 07:29:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:19:16.830 ************************************ 00:19:16.830 START TEST spdk_dd_sparse 00:19:16.830 ************************************ 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:19:16.830 * Looking for test storage... 
00:19:16.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:16.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.830 --rc genhtml_branch_coverage=1 00:19:16.830 --rc genhtml_function_coverage=1 00:19:16.830 --rc genhtml_legend=1 00:19:16.830 --rc geninfo_all_blocks=1 00:19:16.830 --rc geninfo_unexecuted_blocks=1 00:19:16.830 00:19:16.830 ' 00:19:16.830 07:29:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:16.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.830 --rc genhtml_branch_coverage=1 00:19:16.830 --rc genhtml_function_coverage=1 00:19:16.830 --rc genhtml_legend=1 00:19:16.830 --rc geninfo_all_blocks=1 00:19:16.830 --rc geninfo_unexecuted_blocks=1 00:19:16.830 00:19:16.830 ' 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:16.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.831 --rc genhtml_branch_coverage=1 00:19:16.831 --rc genhtml_function_coverage=1 00:19:16.831 --rc genhtml_legend=1 00:19:16.831 --rc geninfo_all_blocks=1 00:19:16.831 --rc geninfo_unexecuted_blocks=1 00:19:16.831 00:19:16.831 ' 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:16.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.831 --rc genhtml_branch_coverage=1 00:19:16.831 --rc genhtml_function_coverage=1 00:19:16.831 --rc genhtml_legend=1 00:19:16.831 --rc geninfo_all_blocks=1 00:19:16.831 --rc geninfo_unexecuted_blocks=1 00:19:16.831 00:19:16.831 ' 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.831 07:29:20 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # export PATH 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:19:16.831 07:29:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:19:17.091 1+0 records in 00:19:17.091 1+0 records out 00:19:17.091 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0119412 s, 351 MB/s 00:19:17.091 07:29:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:19:17.091 1+0 records in 00:19:17.091 1+0 records out 00:19:17.091 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00708219 s, 592 MB/s 00:19:17.091 07:29:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:19:17.091 1+0 records in 00:19:17.091 1+0 records out 00:19:17.091 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0124731 s, 336 MB/s 00:19:17.091 07:29:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:19:17.091 07:29:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:17.091 07:29:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.091 07:29:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:19:17.091 ************************************ 00:19:17.091 START TEST dd_sparse_file_to_file 00:19:17.091 ************************************ 00:19:17.091 07:29:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:19:17.091 07:29:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:19:17.091 07:29:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:19:17.091 07:29:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:19:17.091 07:29:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A 
method_bdev_aio_create_0 00:19:17.091 07:29:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:19:17.091 07:29:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:19:17.091 07:29:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:19:17.091 07:29:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:19:17.091 07:29:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:19:17.091 07:29:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:19:17.091 { 00:19:17.091 "subsystems": [ 00:19:17.091 { 00:19:17.091 "subsystem": "bdev", 00:19:17.091 "config": [ 00:19:17.091 { 00:19:17.091 "params": { 00:19:17.091 "block_size": 4096, 00:19:17.092 "filename": "dd_sparse_aio_disk", 00:19:17.092 "name": "dd_aio" 00:19:17.092 }, 00:19:17.092 "method": "bdev_aio_create" 00:19:17.092 }, 00:19:17.092 { 00:19:17.092 "params": { 00:19:17.092 "lvs_name": "dd_lvstore", 00:19:17.092 "bdev_name": "dd_aio" 00:19:17.092 }, 00:19:17.092 "method": "bdev_lvol_create_lvstore" 00:19:17.092 }, 00:19:17.092 { 00:19:17.092 "method": "bdev_wait_for_examine" 00:19:17.092 } 00:19:17.092 ] 00:19:17.092 } 00:19:17.092 ] 00:19:17.092 } 00:19:17.092 [2024-11-20 07:29:20.867903] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:19:17.092 [2024-11-20 07:29:20.868024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76601 ] 00:19:17.351 [2024-11-20 07:29:21.045871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.351 [2024-11-20 07:29:21.169898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.918  [2024-11-20T07:29:23.230Z] Copying: 12/36 [MB] (average 1200 MBps) 00:19:19.297 00:19:19.297 07:29:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:19:19.297 07:29:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:19:19.297 07:29:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:19:19.297 07:29:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:19:19.297 07:29:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:19:19.297 07:29:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:19:19.297 07:29:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:19:19.297 07:29:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:19:19.297 07:29:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:19:19.297 07:29:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:19:19.297 00:19:19.297 real 0m2.137s 00:19:19.297 user 0m1.748s 00:19:19.297 sys 0m0.281s 00:19:19.297 07:29:22 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.297 07:29:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:19:19.297 ************************************ 00:19:19.297 END TEST dd_sparse_file_to_file 00:19:19.297 ************************************ 00:19:19.297 07:29:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:19:19.297 07:29:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:19.297 07:29:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.297 07:29:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:19:19.297 ************************************ 00:19:19.297 START TEST dd_sparse_file_to_bdev 00:19:19.297 ************************************ 00:19:19.297 07:29:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:19:19.297 07:29:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:19:19.297 07:29:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:19:19.297 07:29:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:19:19.297 07:29:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:19:19.297 07:29:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:19:19.297 07:29:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:19:19.297 07:29:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:19:19.297 07:29:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:19:19.297 { 00:19:19.297 "subsystems": [ 00:19:19.297 { 00:19:19.297 "subsystem": "bdev", 00:19:19.297 "config": [ 00:19:19.297 { 00:19:19.297 "params": { 00:19:19.297 "block_size": 4096, 00:19:19.297 "filename": "dd_sparse_aio_disk", 00:19:19.297 "name": "dd_aio" 00:19:19.297 }, 00:19:19.297 "method": "bdev_aio_create" 00:19:19.297 }, 00:19:19.297 { 00:19:19.297 "params": { 00:19:19.297 "lvs_name": "dd_lvstore", 00:19:19.297 "lvol_name": "dd_lvol", 00:19:19.297 "size_in_mib": 36, 00:19:19.297 "thin_provision": true 00:19:19.297 }, 00:19:19.297 "method": "bdev_lvol_create" 00:19:19.297 }, 00:19:19.297 { 00:19:19.297 "method": "bdev_wait_for_examine" 00:19:19.297 } 00:19:19.297 ] 00:19:19.297 } 00:19:19.297 ] 00:19:19.297 } 00:19:19.297 [2024-11-20 07:29:23.072692] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:19:19.297 [2024-11-20 07:29:23.072995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76660 ] 00:19:19.556 [2024-11-20 07:29:23.265559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.556 [2024-11-20 07:29:23.388024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.125  [2024-11-20T07:29:25.437Z] Copying: 12/36 [MB] (average 521 MBps) 00:19:21.504 00:19:21.504 00:19:21.504 real 0m2.139s 00:19:21.504 user 0m1.775s 00:19:21.504 sys 0m0.275s 00:19:21.504 07:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.504 07:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:19:21.504 ************************************ 00:19:21.504 END TEST dd_sparse_file_to_bdev 00:19:21.504 ************************************ 00:19:21.504 07:29:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:19:21.504 07:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:21.504 07:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.504 07:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:19:21.504 ************************************ 00:19:21.504 START TEST dd_sparse_bdev_to_file 00:19:21.504 ************************************ 00:19:21.504 07:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:19:21.504 07:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:19:21.504 07:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:19:21.504 07:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:19:21.504 07:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:19:21.504 07:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:19:21.504 07:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:19:21.504 07:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:19:21.504 07:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:19:21.504 { 00:19:21.504 "subsystems": [ 00:19:21.504 { 00:19:21.504 "subsystem": "bdev", 00:19:21.504 "config": [ 00:19:21.504 { 00:19:21.504 "params": { 00:19:21.504 "block_size": 4096, 00:19:21.504 "filename": "dd_sparse_aio_disk", 00:19:21.504 "name": "dd_aio" 00:19:21.504 }, 00:19:21.504 "method": "bdev_aio_create" 00:19:21.504 }, 00:19:21.504 { 00:19:21.504 "method": "bdev_wait_for_examine" 00:19:21.504 } 00:19:21.504 ] 00:19:21.504 } 00:19:21.504 ] 00:19:21.504 } 00:19:21.504 [2024-11-20 07:29:25.270044] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:19:21.504 [2024-11-20 07:29:25.270247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76709 ] 00:19:21.763 [2024-11-20 07:29:25.443252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.764 [2024-11-20 07:29:25.565334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.347  [2024-11-20T07:29:27.659Z] Copying: 12/36 [MB] (average 1200 MBps) 00:19:23.726 00:19:23.726 07:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:19:23.726 07:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:19:23.726 07:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:19:23.726 07:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:19:23.726 07:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:19:23.726 07:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:19:23.726 07:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:19:23.726 07:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:19:23.726 07:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:19:23.726 07:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:19:23.726 00:19:23.726 real 0m2.148s 00:19:23.726 user 0m1.772s 00:19:23.726 sys 0m0.278s 00:19:23.726 07:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.726 07:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:19:23.726 ************************************ 00:19:23.726 END TEST dd_sparse_bdev_to_file 00:19:23.726 ************************************ 00:19:23.726 07:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:19:23.726 07:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:19:23.726 07:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:19:23.726 07:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:19:23.726 07:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:19:23.726 00:19:23.726 real 0m6.909s 00:19:23.726 user 0m5.476s 00:19:23.727 sys 0m1.158s 00:19:23.727 07:29:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.727 07:29:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:19:23.727 ************************************ 00:19:23.727 END TEST spdk_dd_sparse 00:19:23.727 ************************************ 00:19:23.727 07:29:27 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:19:23.727 07:29:27 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:23.727 07:29:27 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.727 07:29:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:19:23.727 ************************************ 00:19:23.727 START TEST spdk_dd_negative 
00:19:23.727 ************************************ 00:19:23.727 07:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:19:23.727 * Looking for test storage... 00:19:23.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:23.727 07:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:23.727 07:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:19:23.727 07:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:23.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.988 --rc genhtml_branch_coverage=1 00:19:23.988 --rc genhtml_function_coverage=1 00:19:23.988 --rc genhtml_legend=1 00:19:23.988 --rc geninfo_all_blocks=1 00:19:23.988 --rc geninfo_unexecuted_blocks=1 00:19:23.988 00:19:23.988 ' 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:23.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.988 --rc genhtml_branch_coverage=1 00:19:23.988 --rc genhtml_function_coverage=1 00:19:23.988 --rc genhtml_legend=1 00:19:23.988 --rc geninfo_all_blocks=1 00:19:23.988 --rc geninfo_unexecuted_blocks=1 00:19:23.988 00:19:23.988 ' 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:23.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.988 --rc genhtml_branch_coverage=1 00:19:23.988 --rc genhtml_function_coverage=1 00:19:23.988 --rc genhtml_legend=1 00:19:23.988 --rc geninfo_all_blocks=1 00:19:23.988 --rc geninfo_unexecuted_blocks=1 00:19:23.988 00:19:23.988 ' 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:23.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.988 --rc genhtml_branch_coverage=1 00:19:23.988 --rc genhtml_function_coverage=1 00:19:23.988 --rc genhtml_legend=1 00:19:23.988 --rc geninfo_all_blocks=1 00:19:23.988 --rc geninfo_unexecuted_blocks=1 00:19:23.988 00:19:23.988 ' 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:23.988 07:29:27 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # export PATH 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:23.989 ************************************ 00:19:23.989 START TEST dd_invalid_arguments 00:19:23.989 ************************************ 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:23.989 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:19:23.989 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:19:23.989 00:19:23.989 CPU options: 00:19:23.989 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:19:23.989 (like [0,1,10]) 00:19:23.989 --lcores lcore to CPU mapping list. The list is in the format: 00:19:23.989 [<,lcores[@CPUs]>...] 00:19:23.989 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:19:23.989 Within the group, '-' is used for range separator, 00:19:23.989 ',' is used for single number separator. 00:19:23.989 '( )' can be omitted for single element group, 00:19:23.989 '@' can be omitted if cpus and lcores have the same value 00:19:23.989 --disable-cpumask-locks Disable CPU core lock files. 00:19:23.989 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:19:23.989 pollers in the app support interrupt mode) 00:19:23.989 -p, --main-core main (primary) core for DPDK 00:19:23.989 00:19:23.989 Configuration options: 00:19:23.989 -c, --config, --json JSON config file 00:19:23.989 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:19:23.989 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:19:23.989 --wait-for-rpc wait for RPCs to initialize subsystems 00:19:23.989 --rpcs-allowed comma-separated list of permitted RPCS 00:19:23.989 --json-ignore-init-errors don't exit on invalid config entry 00:19:23.989 00:19:23.989 Memory options: 00:19:23.989 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:19:23.989 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:19:23.989 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:19:23.989 -R, --huge-unlink unlink huge files after initialization 00:19:23.989 -n, --mem-channels number of memory channels used for DPDK 00:19:23.989 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:19:23.989 --msg-mempool-size global message memory pool size in count (default: 262143) 00:19:23.989 --no-huge run without using hugepages 00:19:23.989 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:19:23.989 -i, --shm-id shared memory ID (optional) 00:19:23.989 -g, --single-file-segments force creating just one hugetlbfs file 00:19:23.989 00:19:23.989 PCI options: 00:19:23.989 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:19:23.989 -B, --pci-blocked pci addr to block (can be used more than once) 00:19:23.989 -u, --no-pci disable PCI access 00:19:23.989 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:19:23.989 00:19:23.989 Log options: 00:19:23.989 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:19:23.989 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:19:23.989 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:19:23.989 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:19:23.989 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:19:23.989 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:19:23.989 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:19:23.989 sock_posix, spdk_aio_mgr_io, thread, trace, 
vbdev_delay, vbdev_gpt, 00:19:23.989 vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, 00:19:23.989 vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, 00:19:23.989 virtio_user, virtio_vfio_user, vmd) 00:19:23.989 --silence-noticelog disable notice level logging to stderr 00:19:23.989 00:19:23.989 Trace options: 00:19:23.989 --num-trace-entries number of trace entries for each core, must be power of 2, 00:19:23.989 setting 0 to disable trace (default 32768) 00:19:23.989 Tracepoints vary in size and can use more than one trace entry. 00:19:23.989 -e, --tpoint-group [:] 00:19:23.989 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:19:23.989 [2024-11-20 07:29:27.766341] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:19:23.989 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:19:23.989 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:19:23.989 bdev_raid, scheduler, all). 00:19:23.989 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:19:23.989 a tracepoint group. First tpoint inside a group can be enabled by 00:19:23.989 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:19:23.989 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:19:23.989 in /include/spdk_internal/trace_defs.h 00:19:23.989 00:19:23.989 Other options: 00:19:23.989 -h, --help show this usage 00:19:23.989 -v, --version print SPDK version 00:19:23.989 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:19:23.989 --env-context Opaque context for use of the env implementation 00:19:23.989 00:19:23.989 Application specific: 00:19:23.989 [--------- DD Options ---------] 00:19:23.989 --if Input file. Must specify either --if or --ib. 00:19:23.989 --ib Input bdev. Must specifier either --if or --ib 00:19:23.989 --of Output file. Must specify either --of or --ob. 00:19:23.989 --ob Output bdev. Must specify either --of or --ob. 00:19:23.989 --iflag Input file flags. 00:19:23.989 --oflag Output file flags. 00:19:23.989 --bs I/O unit size (default: 4096) 00:19:23.989 --qd Queue depth (default: 2) 00:19:23.989 --count I/O unit count. The number of I/O units to copy. (default: all) 00:19:23.989 --skip Skip this many I/O units at start of input. (default: 0) 00:19:23.989 --seek Skip this many I/O units at start of output. (default: 0) 00:19:23.989 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:19:23.989 --sparse Enable hole skipping in input target 00:19:23.989 Available iflag and oflag values: 00:19:23.989 append - append mode 00:19:23.990 direct - use direct I/O for data 00:19:23.990 directory - fail unless a directory 00:19:23.990 dsync - use synchronized I/O for data 00:19:23.990 noatime - do not update access time 00:19:23.990 noctty - do not assign controlling terminal from file 00:19:23.990 nofollow - do not follow symlinks 00:19:23.990 nonblock - use non-blocking I/O 00:19:23.990 sync - use synchronized I/O for data and metadata 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:23.990 00:19:23.990 real 0m0.125s 00:19:23.990 user 0m0.068s 00:19:23.990 sys 0m0.058s 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:19:23.990 ************************************ 00:19:23.990 END TEST dd_invalid_arguments 00:19:23.990 ************************************ 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:23.990 ************************************ 00:19:23.990 START TEST dd_double_input 00:19:23.990 ************************************ 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:23.990 07:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:19:24.250 [2024-11-20 07:29:27.937099] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:19:24.250 07:29:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:19:24.250 07:29:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.250 07:29:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.250 07:29:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.250 00:19:24.250 real 0m0.127s 00:19:24.250 user 0m0.070s 00:19:24.250 sys 0m0.058s 00:19:24.251 07:29:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.251 07:29:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:19:24.251 ************************************ 00:19:24.251 END TEST dd_double_input 00:19:24.251 ************************************ 00:19:24.251 07:29:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:19:24.251 07:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:24.251 07:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.251 07:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:24.251 ************************************ 00:19:24.251 START TEST dd_double_output 00:19:24.251 ************************************ 00:19:24.251 07:29:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:19:24.251 07:29:28 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:19:24.251 07:29:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:19:24.251 07:29:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:19:24.251 07:29:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.251 07:29:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.251 07:29:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.251 07:29:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.251 07:29:28 
spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.251 07:29:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.251 07:29:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.251 07:29:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:24.251 07:29:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:19:24.251 [2024-11-20 07:29:28.126161] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.511 00:19:24.511 real 0m0.133s 00:19:24.511 user 0m0.070s 00:19:24.511 sys 0m0.064s 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:19:24.511 ************************************ 00:19:24.511 END TEST dd_double_output 00:19:24.511 ************************************ 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:24.511 ************************************ 00:19:24.511 START TEST dd_no_input 00:19:24.511 ************************************ 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:19:24.511 [2024-11-20 07:29:28.310142] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.511 00:19:24.511 real 0m0.126s 00:19:24.511 user 0m0.068s 00:19:24.511 sys 0m0.058s 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:19:24.511 ************************************ 00:19:24.511 END TEST dd_no_input 00:19:24.511 ************************************ 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:24.511 ************************************ 00:19:24.511 START TEST dd_no_output 00:19:24.511 ************************************ 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:24.511 07:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.774 07:29:28 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:24.774 [2024-11-20 07:29:28.496500] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.774 00:19:24.774 real 0m0.123s 00:19:24.774 user 0m0.063s 00:19:24.774 sys 0m0.061s 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:19:24.774 ************************************ 00:19:24.774 END TEST dd_no_output 00:19:24.774 ************************************ 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:24.774 ************************************ 00:19:24.774 START TEST dd_wrong_blocksize 00:19:24.774 ************************************ 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.774 07:29:28 
spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:24.774 07:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:19:24.774 [2024-11-20 07:29:28.677151] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:25.035 00:19:25.035 real 0m0.138s 00:19:25.035 user 0m0.078s 00:19:25.035 sys 0m0.061s 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:19:25.035 ************************************ 00:19:25.035 END TEST dd_wrong_blocksize 00:19:25.035 ************************************ 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:25.035 ************************************ 00:19:25.035 START TEST dd_smaller_blocksize 00:19:25.035 ************************************ 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.035 07:29:28 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:25.035 07:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:19:25.035 [2024-11-20 07:29:28.868450] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:19:25.035 [2024-11-20 07:29:28.868570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76961 ] 00:19:25.295 [2024-11-20 07:29:29.064295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.295 [2024-11-20 07:29:29.188810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.864 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:19:26.124 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:19:26.124 [2024-11-20 07:29:30.001029] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:19:26.124 [2024-11-20 07:29:30.001102] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:27.064 [2024-11-20 07:29:30.915084] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:27.324 07:29:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:19:27.324 07:29:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.324 07:29:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:19:27.324 07:29:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:19:27.324 07:29:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:19:27.324 07:29:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.324 00:19:27.324 real 0m2.388s 00:19:27.324 user 0m1.611s 00:19:27.324 sys 0m0.676s 00:19:27.324 07:29:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.324 07:29:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:19:27.324 ************************************ 00:19:27.324 END TEST dd_smaller_blocksize 00:19:27.324 ************************************ 00:19:27.324 07:29:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 
-- # run_test dd_invalid_count invalid_count 00:19:27.324 07:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:27.324 07:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.324 07:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:27.584 ************************************ 00:19:27.584 START TEST dd_invalid_count 00:19:27.584 ************************************ 00:19:27.584 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 00:19:27.584 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:19:27.584 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:19:27.584 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:19:27.584 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:27.584 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:19:27.585 [2024-11-20 07:29:31.313800] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.585 00:19:27.585 real 0m0.120s 00:19:27.585 user 0m0.061s 00:19:27.585 sys 0m0.059s 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 
00:19:27.585 ************************************ 00:19:27.585 END TEST dd_invalid_count 00:19:27.585 ************************************ 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:27.585 ************************************ 00:19:27.585 START TEST dd_invalid_oflag 00:19:27.585 ************************************ 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:27.585 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:19:27.585 [2024-11-20 07:29:31.496658] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:19:27.845 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:19:27.845 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.845 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.845 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.845 00:19:27.845 real 0m0.121s 00:19:27.845 user 0m0.059s 00:19:27.845 sys 0m0.064s 00:19:27.845 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.845 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:19:27.845 ************************************ 
00:19:27.845 END TEST dd_invalid_oflag 00:19:27.845 ************************************ 00:19:27.845 07:29:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:19:27.845 07:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:27.846 ************************************ 00:19:27.846 START TEST dd_invalid_iflag 00:19:27.846 ************************************ 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:19:27.846 [2024-11-20 07:29:31.679960] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.846 00:19:27.846 real 0m0.126s 00:19:27.846 user 0m0.068s 00:19:27.846 sys 0m0.059s 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.846 07:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:19:27.846 ************************************ 00:19:27.846 END TEST dd_invalid_iflag 00:19:27.846 
************************************ 00:19:28.106 07:29:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:19:28.106 07:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:28.106 07:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.106 07:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:28.106 ************************************ 00:19:28.106 START TEST dd_unknown_flag 00:19:28.106 ************************************ 00:19:28.106 07:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:19:28.106 07:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:19:28.106 07:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:19:28.106 07:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:19:28.106 07:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:28.106 07:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.106 07:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:28.106 07:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.106 07:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:28.106 07:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.106 07:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:28.106 07:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:28.106 07:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:19:28.106 [2024-11-20 07:29:31.872525] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:19:28.106 [2024-11-20 07:29:31.872645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77075 ] 00:19:28.366 [2024-11-20 07:29:32.045483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.366 [2024-11-20 07:29:32.200632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.647  [2024-11-20T07:29:32.580Z] Copying: 0/0 [B] (average 0 Bps)[2024-11-20 07:29:32.539915] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:19:28.647 [2024-11-20 07:29:32.539978] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:28.647 [2024-11-20 07:29:32.540133] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:19:29.604 [2024-11-20 07:29:33.458819] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:29.863 00:19:29.863 00:19:29.863 07:29:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:19:29.863 07:29:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:29.863 07:29:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:19:29.863 07:29:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:19:29.863 07:29:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:19:29.863 07:29:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:29.863 00:19:29.863 real 0m1.965s 00:19:29.863 user 0m1.615s 00:19:29.863 sys 0m0.238s 00:19:29.863 07:29:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.863 07:29:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:19:29.863 ************************************ 00:19:29.863 END TEST dd_unknown_flag 00:19:29.863 ************************************ 00:19:30.123 07:29:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:19:30.123 07:29:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:30.123 07:29:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.123 07:29:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:30.123 ************************************ 00:19:30.123 START TEST dd_invalid_json 00:19:30.123 ************************************ 00:19:30.123 07:29:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:19:30.123 07:29:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:19:30.123 07:29:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:19:30.123 07:29:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:19:30.123 07:29:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json 
/dev/fd/62 00:19:30.123 07:29:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.123 07:29:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.123 07:29:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.123 07:29:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.123 07:29:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.123 07:29:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.123 07:29:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.123 07:29:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:30.123 07:29:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:19:30.124 [2024-11-20 07:29:33.909089] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:19:30.124 [2024-11-20 07:29:33.909214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77120 ] 00:19:30.384 [2024-11-20 07:29:34.081205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.384 [2024-11-20 07:29:34.203297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.384 [2024-11-20 07:29:34.203381] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:19:30.384 [2024-11-20 07:29:34.203397] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:30.384 [2024-11-20 07:29:34.203409] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:30.384 [2024-11-20 07:29:34.203469] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:30.644 00:19:30.644 real 0m0.642s 00:19:30.644 user 0m0.420s 00:19:30.644 sys 0m0.124s 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:19:30.644 
************************************ 00:19:30.644 END TEST dd_invalid_json 00:19:30.644 ************************************ 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:30.644 ************************************ 00:19:30.644 START TEST dd_invalid_seek 00:19:30.644 ************************************ 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.644 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.645 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:30.645 07:29:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:19:30.905 { 00:19:30.905 "subsystems": [ 00:19:30.905 { 00:19:30.905 "subsystem": "bdev", 00:19:30.905 "config": [ 00:19:30.905 { 00:19:30.905 "params": { 00:19:30.905 "block_size": 512, 00:19:30.905 "num_blocks": 512, 00:19:30.905 "name": "malloc0" 00:19:30.905 }, 00:19:30.905 "method": "bdev_malloc_create" 00:19:30.905 }, 00:19:30.905 { 00:19:30.905 "params": { 00:19:30.905 "block_size": 512, 00:19:30.905 "num_blocks": 512, 00:19:30.905 "name": "malloc1" 00:19:30.905 }, 00:19:30.905 "method": "bdev_malloc_create" 00:19:30.905 }, 00:19:30.905 { 00:19:30.905 "method": "bdev_wait_for_examine" 00:19:30.905 } 00:19:30.905 ] 00:19:30.905 } 00:19:30.905 ] 00:19:30.905 } 00:19:30.905 [2024-11-20 07:29:34.614474] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:19:30.905 [2024-11-20 07:29:34.614592] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77155 ] 00:19:30.905 [2024-11-20 07:29:34.789381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.165 [2024-11-20 07:29:34.913962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.425 [2024-11-20 07:29:35.258497] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:19:31.425 [2024-11-20 07:29:35.258650] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:32.366 [2024-11-20 07:29:36.194014] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:19:32.625 ************************************ 00:19:32.625 END TEST dd_invalid_seek 00:19:32.625 ************************************ 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:32.625 00:19:32.625 real 0m1.928s 00:19:32.625 user 0m1.604s 00:19:32.625 sys 0m0.257s 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:32.625 
07:29:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:32.625 ************************************ 00:19:32.625 START TEST dd_invalid_skip 00:19:32.625 ************************************ 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:19:32.625 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:19:32.885 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.885 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:32.885 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.886 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:32.886 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.886 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:32.886 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:32.886 07:29:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:19:32.886 { 00:19:32.886 "subsystems": [ 00:19:32.886 { 00:19:32.886 
"subsystem": "bdev", 00:19:32.886 "config": [ 00:19:32.886 { 00:19:32.886 "params": { 00:19:32.886 "block_size": 512, 00:19:32.886 "num_blocks": 512, 00:19:32.886 "name": "malloc0" 00:19:32.886 }, 00:19:32.886 "method": "bdev_malloc_create" 00:19:32.886 }, 00:19:32.886 { 00:19:32.886 "params": { 00:19:32.886 "block_size": 512, 00:19:32.886 "num_blocks": 512, 00:19:32.886 "name": "malloc1" 00:19:32.886 }, 00:19:32.886 "method": "bdev_malloc_create" 00:19:32.886 }, 00:19:32.886 { 00:19:32.886 "method": "bdev_wait_for_examine" 00:19:32.886 } 00:19:32.886 ] 00:19:32.886 } 00:19:32.886 ] 00:19:32.886 } 00:19:32.886 [2024-11-20 07:29:36.613138] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:19:32.886 [2024-11-20 07:29:36.613253] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77202 ] 00:19:32.886 [2024-11-20 07:29:36.788331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.146 [2024-11-20 07:29:36.910640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.407 [2024-11-20 07:29:37.259629] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:19:33.407 [2024-11-20 07:29:37.259792] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:34.346 [2024-11-20 07:29:38.170911] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:34.606 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:19:34.606 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:34.606 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:19:34.606 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:19:34.606 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:19:34.606 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:34.606 00:19:34.606 real 0m1.890s 00:19:34.606 user 0m1.592s 00:19:34.606 sys 0m0.234s 00:19:34.606 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.606 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:19:34.606 ************************************ 00:19:34.606 END TEST dd_invalid_skip 00:19:34.606 ************************************ 00:19:34.606 07:29:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:19:34.606 07:29:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:34.606 07:29:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.606 07:29:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:34.606 ************************************ 00:19:34.606 START TEST dd_invalid_input_count 00:19:34.606 ************************************ 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 
mbdev0_b=512 mbdev0_bs=512 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:34.607 07:29:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:19:34.607 { 00:19:34.607 "subsystems": [ 00:19:34.607 { 00:19:34.607 "subsystem": "bdev", 00:19:34.607 "config": [ 00:19:34.607 { 00:19:34.607 "params": { 00:19:34.607 "block_size": 512, 00:19:34.607 "num_blocks": 512, 00:19:34.607 "name": "malloc0" 00:19:34.607 }, 00:19:34.607 "method": "bdev_malloc_create" 00:19:34.607 }, 00:19:34.607 { 00:19:34.607 "params": { 
00:19:34.607 "block_size": 512, 00:19:34.607 "num_blocks": 512, 00:19:34.607 "name": "malloc1" 00:19:34.607 }, 00:19:34.607 "method": "bdev_malloc_create" 00:19:34.607 }, 00:19:34.607 { 00:19:34.607 "method": "bdev_wait_for_examine" 00:19:34.607 } 00:19:34.607 ] 00:19:34.607 } 00:19:34.607 ] 00:19:34.607 } 00:19:34.867 [2024-11-20 07:29:38.565394] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:19:34.867 [2024-11-20 07:29:38.565612] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77253 ] 00:19:34.867 [2024-11-20 07:29:38.737572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.127 [2024-11-20 07:29:38.850067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.399 [2024-11-20 07:29:39.196550] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:19:35.399 [2024-11-20 07:29:39.196610] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:36.341 [2024-11-20 07:29:40.099881] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:19:36.602 ************************************ 00:19:36.602 END TEST dd_invalid_input_count 00:19:36.602 ************************************ 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:36.602 00:19:36.602 real 0m1.879s 00:19:36.602 user 0m1.571s 00:19:36.602 sys 0m0.245s 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:36.602 ************************************ 00:19:36.602 START TEST dd_invalid_output_count 00:19:36.602 ************************************ 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:19:36.602 
07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:36.602 07:29:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:19:36.602 { 00:19:36.602 "subsystems": [ 00:19:36.602 { 00:19:36.602 "subsystem": "bdev", 00:19:36.602 "config": [ 00:19:36.602 { 00:19:36.602 "params": { 00:19:36.602 "block_size": 512, 00:19:36.602 "num_blocks": 512, 00:19:36.602 "name": "malloc0" 00:19:36.602 }, 00:19:36.602 "method": "bdev_malloc_create" 00:19:36.602 }, 00:19:36.602 { 00:19:36.602 "method": "bdev_wait_for_examine" 00:19:36.602 } 00:19:36.602 ] 00:19:36.602 } 00:19:36.602 ] 00:19:36.602 } 00:19:36.602 [2024-11-20 07:29:40.512702] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:19:36.602 [2024-11-20 07:29:40.513135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77299 ] 00:19:36.862 [2024-11-20 07:29:40.686081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.122 [2024-11-20 07:29:40.803940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.382 [2024-11-20 07:29:41.133280] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:19:37.382 [2024-11-20 07:29:41.133358] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:38.348 [2024-11-20 07:29:42.032152] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:38.608 00:19:38.608 real 0m1.858s 00:19:38.608 user 0m1.548s 00:19:38.608 sys 0m0.238s 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:19:38.608 ************************************ 00:19:38.608 END TEST dd_invalid_output_count 00:19:38.608 ************************************ 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:38.608 ************************************ 00:19:38.608 START TEST dd_bs_not_multiple 00:19:38.608 ************************************ 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 
00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:38.608 07:29:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:19:38.608 { 00:19:38.608 "subsystems": [ 00:19:38.608 { 00:19:38.608 "subsystem": "bdev", 00:19:38.608 "config": [ 00:19:38.608 { 00:19:38.608 "params": { 00:19:38.608 "block_size": 512, 00:19:38.608 "num_blocks": 512, 00:19:38.608 "name": "malloc0" 00:19:38.608 }, 00:19:38.608 "method": "bdev_malloc_create" 00:19:38.608 }, 00:19:38.608 { 00:19:38.608 "params": { 00:19:38.608 "block_size": 512, 00:19:38.608 "num_blocks": 512, 00:19:38.608 "name": "malloc1" 00:19:38.608 }, 00:19:38.608 "method": "bdev_malloc_create" 00:19:38.608 }, 00:19:38.608 { 00:19:38.608 "method": "bdev_wait_for_examine" 00:19:38.608 } 00:19:38.608 ] 00:19:38.608 } 00:19:38.608 ] 00:19:38.608 } 00:19:38.608 [2024-11-20 07:29:42.441029] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:19:38.608 [2024-11-20 07:29:42.441159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77343 ] 00:19:38.867 [2024-11-20 07:29:42.614551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.867 [2024-11-20 07:29:42.731887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.441 [2024-11-20 07:29:43.070751] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:19:39.441 [2024-11-20 07:29:43.070815] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:40.380 [2024-11-20 07:29:43.950648] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:40.380 07:29:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:19:40.380 07:29:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:40.380 07:29:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:19:40.380 07:29:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:19:40.380 07:29:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:19:40.380 07:29:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:40.380 00:19:40.380 real 0m1.851s 00:19:40.380 user 0m1.521s 00:19:40.380 sys 0m0.265s 00:19:40.380 07:29:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.380 07:29:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:19:40.380 ************************************ 00:19:40.380 END TEST dd_bs_not_multiple 00:19:40.380 ************************************ 00:19:40.380 ************************************ 00:19:40.380 END TEST spdk_dd_negative 00:19:40.380 ************************************ 00:19:40.380 00:19:40.380 real 0m16.788s 00:19:40.380 user 0m12.473s 00:19:40.380 sys 0m3.740s 00:19:40.380 07:29:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.380 07:29:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:40.640 00:19:40.640 real 2m52.048s 00:19:40.640 user 2m17.538s 00:19:40.640 sys 0m24.538s 00:19:40.640 07:29:44 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.640 07:29:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:19:40.640 ************************************ 00:19:40.640 END TEST spdk_dd 00:19:40.640 ************************************ 00:19:40.640 07:29:44 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:19:40.640 07:29:44 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:19:40.640 07:29:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:40.640 07:29:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.640 07:29:44 -- common/autotest_common.sh@10 -- # set +x 00:19:40.640 ************************************ 00:19:40.640 START TEST blockdev_nvme 00:19:40.640 ************************************ 00:19:40.640 07:29:44 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:19:40.640 * Looking for test 
storage... 00:19:40.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:40.640 07:29:44 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:40.640 07:29:44 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:19:40.640 07:29:44 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:40.900 07:29:44 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:40.900 07:29:44 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.900 07:29:44 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.900 07:29:44 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.900 07:29:44 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.900 07:29:44 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.900 07:29:44 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.900 07:29:44 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.900 07:29:44 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.900 07:29:44 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.900 07:29:44 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.900 07:29:44 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.900 07:29:44 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:19:40.901 07:29:44 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:19:40.901 07:29:44 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.901 07:29:44 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:40.901 07:29:44 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:19:40.901 07:29:44 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:19:40.901 07:29:44 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.901 07:29:44 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:19:40.901 07:29:44 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.901 07:29:44 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:19:40.901 07:29:44 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:19:40.901 07:29:44 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.901 07:29:44 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:19:40.901 07:29:44 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.901 07:29:44 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.901 07:29:44 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.901 07:29:44 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:19:40.901 07:29:44 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.901 07:29:44 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:40.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.901 --rc genhtml_branch_coverage=1 00:19:40.901 --rc genhtml_function_coverage=1 00:19:40.901 --rc genhtml_legend=1 00:19:40.901 --rc geninfo_all_blocks=1 00:19:40.901 --rc geninfo_unexecuted_blocks=1 00:19:40.901 00:19:40.901 ' 00:19:40.901 07:29:44 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:40.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.901 --rc genhtml_branch_coverage=1 00:19:40.901 --rc genhtml_function_coverage=1 00:19:40.901 --rc genhtml_legend=1 00:19:40.901 --rc 
geninfo_all_blocks=1 00:19:40.901 --rc geninfo_unexecuted_blocks=1 00:19:40.901 00:19:40.901 ' 00:19:40.901 07:29:44 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:40.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.901 --rc genhtml_branch_coverage=1 00:19:40.901 --rc genhtml_function_coverage=1 00:19:40.901 --rc genhtml_legend=1 00:19:40.901 --rc geninfo_all_blocks=1 00:19:40.901 --rc geninfo_unexecuted_blocks=1 00:19:40.901 00:19:40.901 ' 00:19:40.901 07:29:44 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:40.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.901 --rc genhtml_branch_coverage=1 00:19:40.901 --rc genhtml_function_coverage=1 00:19:40.901 --rc genhtml_legend=1 00:19:40.901 --rc geninfo_all_blocks=1 00:19:40.901 --rc geninfo_unexecuted_blocks=1 00:19:40.901 00:19:40.901 ' 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:40.901 07:29:44 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=77459 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:40.901 07:29:44 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 77459 00:19:40.901 07:29:44 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 77459 ']' 00:19:40.901 07:29:44 blockdev_nvme 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.901 07:29:44 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.901 07:29:44 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.901 07:29:44 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.901 07:29:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:40.901 [2024-11-20 07:29:44.692162] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:19:40.901 [2024-11-20 07:29:44.692268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77459 ] 00:19:41.161 [2024-11-20 07:29:44.868128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.161 [2024-11-20 07:29:44.987455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.100 07:29:45 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.100 07:29:45 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:19:42.100 07:29:45 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:42.100 07:29:45 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:19:42.100 07:29:45 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:19:42.100 07:29:45 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:19:42.100 07:29:45 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:42.100 07:29:45 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:19:42.100 07:29:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.100 07:29:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.361 07:29:46 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.361 07:29:46 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:19:42.361 07:29:46 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.361 07:29:46 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.361 07:29:46 blockdev_nvme -- 
bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.361 07:29:46 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:42.361 07:29:46 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.361 07:29:46 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.361 07:29:46 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:42.361 07:29:46 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3828e86c-7884-423c-b3e8-0dc7d54ca628"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3828e86c-7884-423c-b3e8-0dc7d54ca628",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:19:42.361 07:29:46 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:42.361 07:29:46 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:42.361 07:29:46 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:19:42.361 07:29:46 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:42.361 07:29:46 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 77459 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 77459 ']' 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 77459 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77459 00:19:42.361 killing process with pid 77459 00:19:42.361 07:29:46 blockdev_nvme -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77459' 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 77459 00:19:42.361 07:29:46 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 77459 00:19:44.900 07:29:48 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:44.901 07:29:48 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:19:44.901 07:29:48 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:44.901 07:29:48 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.901 07:29:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:44.901 ************************************ 00:19:44.901 START TEST bdev_hello_world 00:19:44.901 ************************************ 00:19:44.901 07:29:48 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:19:44.901 [2024-11-20 07:29:48.467834] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:19:44.901 [2024-11-20 07:29:48.467968] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77539 ] 00:19:44.901 [2024-11-20 07:29:48.641347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.901 [2024-11-20 07:29:48.763867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.473 [2024-11-20 07:29:49.206589] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:45.473 [2024-11-20 07:29:49.206641] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:19:45.473 [2024-11-20 07:29:49.206674] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:45.473 [2024-11-20 07:29:49.209577] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:45.473 [2024-11-20 07:29:49.210072] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:45.473 [2024-11-20 07:29:49.210113] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:45.473 [2024-11-20 07:29:49.210343] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:19:45.473 00:19:45.473 [2024-11-20 07:29:49.210393] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:46.887 00:19:46.887 real 0m1.934s 00:19:46.887 user 0m1.600s 00:19:46.887 sys 0m0.235s 00:19:46.887 07:29:50 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:46.887 07:29:50 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:46.887 ************************************ 00:19:46.887 END TEST bdev_hello_world 00:19:46.887 ************************************ 00:19:46.887 07:29:50 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:46.887 07:29:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:46.887 07:29:50 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:46.887 07:29:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:46.887 ************************************ 00:19:46.887 START TEST bdev_bounds 00:19:46.887 ************************************ 00:19:46.887 07:29:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:46.887 07:29:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=77580 00:19:46.887 07:29:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:46.887 07:29:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:46.887 Process bdevio pid: 77580 00:19:46.887 07:29:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 77580' 00:19:46.887 07:29:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 77580 00:19:46.887 07:29:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 77580 ']' 00:19:46.887 07:29:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.887 07:29:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.887 07:29:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.887 07:29:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.887 07:29:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:46.887 [2024-11-20 07:29:50.473398] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:19:46.887 [2024-11-20 07:29:50.473526] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77580 ] 00:19:46.887 [2024-11-20 07:29:50.649482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:46.887 [2024-11-20 07:29:50.779743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.887 [2024-11-20 07:29:50.779865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.887 [2024-11-20 07:29:50.779907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.458 07:29:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.458 07:29:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:47.458 07:29:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:47.718 I/O targets: 00:19:47.718 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:19:47.718 00:19:47.718 00:19:47.718 CUnit - A unit testing framework for C - Version 2.1-3 00:19:47.718 http://cunit.sourceforge.net/ 00:19:47.718 00:19:47.718 00:19:47.718 Suite: bdevio tests on: Nvme0n1 00:19:47.718 Test: blockdev write read block ...passed 00:19:47.718 Test: blockdev write zeroes read block ...passed 00:19:47.718 Test: blockdev write zeroes read no split ...passed 00:19:47.718 Test: blockdev write zeroes read split ...passed 00:19:47.718 Test: blockdev write zeroes read split partial ...passed 00:19:47.718 Test: blockdev reset ...[2024-11-20 07:29:51.468901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:19:47.718 [2024-11-20 07:29:51.472758] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:19:47.718 passed 00:19:47.718 Test: blockdev write read 8 blocks ...passed 00:19:47.718 Test: blockdev write read size > 128k ...passed 00:19:47.718 Test: blockdev write read invalid size ...passed 00:19:47.718 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:47.718 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:47.718 Test: blockdev write read max offset ...passed 00:19:47.718 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:47.718 Test: blockdev writev readv 8 blocks ...passed 00:19:47.718 Test: blockdev writev readv 30 x 1block ...passed 00:19:47.718 Test: blockdev writev readv block ...passed 00:19:47.719 Test: blockdev writev readv size > 128k ...passed 00:19:47.719 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:47.719 Test: blockdev comparev and writev ...[2024-11-20 07:29:51.481219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b820d000 len:0x1000 00:19:47.719 [2024-11-20 07:29:51.481292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:19:47.719 passed 00:19:47.719 Test: blockdev nvme passthru rw ...passed 00:19:47.719 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:29:51.482129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:19:47.719 [2024-11-20 07:29:51.482175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:19:47.719 passed 00:19:47.719 Test: blockdev nvme admin passthru ...passed 00:19:47.719 Test: blockdev copy ...passed 00:19:47.719 00:19:47.719 Run Summary: Type Total Ran Passed Failed Inactive 00:19:47.719 suites 1 1 n/a 0 0 00:19:47.719 tests 23 23 23 0 0 00:19:47.719 asserts 152 152 152 0 n/a 00:19:47.719 00:19:47.719 Elapsed time = 0.245 seconds 00:19:47.719 0 00:19:47.719 07:29:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 77580 00:19:47.719 07:29:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 77580 ']' 00:19:47.719 07:29:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 77580 00:19:47.719 07:29:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:47.719 07:29:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.719 07:29:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77580 00:19:47.719 07:29:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:47.719 07:29:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:47.719 killing process with pid 77580 00:19:47.719 07:29:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77580' 00:19:47.719 07:29:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 77580 00:19:47.719 07:29:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 77580 00:19:49.101 07:29:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:49.101 00:19:49.101 real 0m2.293s 00:19:49.101 user 0m5.636s 00:19:49.101 sys 0m0.356s 00:19:49.101 07:29:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:49.101 07:29:52 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:49.101 ************************************ 00:19:49.101 END TEST bdev_bounds 00:19:49.101 ************************************ 00:19:49.101 07:29:52 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:19:49.101 07:29:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:49.101 07:29:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:49.101 07:29:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:49.101 ************************************ 00:19:49.101 START TEST bdev_nbd 00:19:49.101 ************************************ 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1') 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1') 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=77635 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:49.101 07:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 77635 /var/tmp/spdk-nbd.sock 00:19:49.102 07:29:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 77635 ']' 00:19:49.102 07:29:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:49.102 07:29:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:19:49.102 07:29:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:49.102 07:29:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.102 07:29:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:49.102 [2024-11-20 07:29:52.845833] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:19:49.102 [2024-11-20 07:29:52.845952] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.361 [2024-11-20 07:29:53.024607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.361 [2024-11-20 07:29:53.135144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.932 07:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.932 07:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:49.932 07:29:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:19:49.932 07:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:49.932 07:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:19:49.932 07:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:49.932 07:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:19:49.932 07:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:49.932 07:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:19:49.932 07:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:49.932 07:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:49.932 07:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:49.932 07:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:49.932 07:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:49.932 07:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:19:49.932 07:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:49.932 07:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:50.192 07:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:50.192 07:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:50.192 07:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:50.192 07:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:50.192 07:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:50.192 07:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:50.192 07:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:50.192 07:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:19:50.192 07:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:50.192 07:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:50.192 1+0 records in 00:19:50.192 1+0 records out 00:19:50.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136205 s, 3.0 MB/s 00:19:50.192 07:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:50.192 07:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:50.192 07:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:50.192 07:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:50.192 07:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:50.192 07:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:50.192 07:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:50.192 07:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:50.192 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:50.192 { 00:19:50.192 "nbd_device": "/dev/nbd0", 00:19:50.192 "bdev_name": "Nvme0n1" 00:19:50.192 } 00:19:50.192 ]' 00:19:50.192 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:50.193 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:50.193 { 00:19:50.193 "nbd_device": "/dev/nbd0", 00:19:50.193 "bdev_name": "Nvme0n1" 00:19:50.193 } 00:19:50.193 ]' 00:19:50.193 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:50.193 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:50.193 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:50.193 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:50.193 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:50.193 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:50.193 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:50.193 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:50.453 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:50.453 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:50.453 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:50.453 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:50.453 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:50.453 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:50.453 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:50.453 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:50.453 07:29:54 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:50.453 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:50.453 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:50.712 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:50.712 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:50.712 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:50.712 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:50.712 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:50.712 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:50.712 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:50.712 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:50.712 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:50.712 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:50.712 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:50.712 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:50.712 07:29:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:19:50.713 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:50.713 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:19:50.713 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:50.713 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:50.713 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:50.713 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:19:50.713 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:50.713 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1') 00:19:50.713 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:50.713 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:50.713 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:50.713 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:50.713 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:50.713 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:50.713 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:19:50.973 /dev/nbd0 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 
-- # (( i = 1 )) 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:50.973 1+0 records in 00:19:50.973 1+0 records out 00:19:50.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683976 s, 6.0 MB/s 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:50.973 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:51.234 { 00:19:51.234 "nbd_device": "/dev/nbd0", 00:19:51.234 "bdev_name": "Nvme0n1" 00:19:51.234 } 00:19:51.234 ]' 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:51.234 { 00:19:51.234 "nbd_device": "/dev/nbd0", 00:19:51.234 "bdev_name": "Nvme0n1" 00:19:51.234 } 00:19:51.234 ]' 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- 
# local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:51.234 256+0 records in 00:19:51.234 256+0 records out 00:19:51.234 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130455 s, 80.4 MB/s 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:51.234 07:29:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:51.234 256+0 records in 00:19:51.234 256+0 records out 00:19:51.234 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0707132 s, 14.8 MB/s 00:19:51.234 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:51.234 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:51.234 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:51.234 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:51.234 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:51.234 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:51.234 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:51.234 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:51.234 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:51.234 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:51.234 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:51.234 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:51.234 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:51.234 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:51.234 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:51.234 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:51.234 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:51.494 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:51.494 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:51.494 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:51.494 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:51.494 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:51.494 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:51.494 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:51.494 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:51.494 07:29:55 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:51.494 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:51.494 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:51.754 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:51.754 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:51.754 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:51.754 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:51.754 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:51.754 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:51.754 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:51.754 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:51.754 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:51.754 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:51.754 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:51.754 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:51.754 07:29:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:51.754 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:51.754 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:51.754 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:52.013 malloc_lvol_verify 00:19:52.013 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:52.013 116e0b78-1965-47ce-823f-0aea1df4a305 00:19:52.279 07:29:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:52.279 6833c625-3fa1-4b29-8bf6-9220a673cefe 00:19:52.279 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:52.551 /dev/nbd0 00:19:52.551 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:52.551 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:52.551 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:52.551 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:52.551 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:52.551 mke2fs 1.47.0 (5-Feb-2023) 00:19:52.551 00:19:52.551 Filesystem too small for a journal 00:19:52.551 Discarding device blocks: 0/1024 done 00:19:52.551 Creating filesystem with 1024 4k blocks and 1024 inodes 00:19:52.551 00:19:52.551 Allocating group tables: 0/1 done 00:19:52.551 Writing inode tables: 0/1 done 00:19:52.551 Writing superblocks and filesystem accounting information: 
0/1 done 00:19:52.551 00:19:52.551 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:52.551 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:52.551 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:52.551 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:52.551 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:52.551 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:52.551 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:52.811 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:52.811 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:52.811 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:52.811 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:52.811 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:52.811 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:52.812 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:52.812 07:29:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:52.812 07:29:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 77635 00:19:52.812 07:29:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 77635 ']' 00:19:52.812 07:29:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 77635 00:19:52.812 07:29:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:52.812 07:29:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.812 07:29:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77635 00:19:52.812 07:29:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:52.812 07:29:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:52.812 07:29:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77635' 00:19:52.812 killing process with pid 77635 00:19:52.812 07:29:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 77635 00:19:52.812 07:29:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 77635 00:19:54.193 ************************************ 00:19:54.193 END TEST bdev_nbd 00:19:54.193 ************************************ 00:19:54.193 07:29:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:54.193 00:19:54.193 real 0m5.089s 00:19:54.193 user 0m6.877s 00:19:54.193 sys 0m1.193s 00:19:54.193 07:29:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.193 07:29:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:54.193 07:29:57 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:54.193 07:29:57 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:19:54.193 skipping fio tests on NVMe due to multi-ns failures. 
00:19:54.193 07:29:57 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:19:54.193 07:29:57 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:54.193 07:29:57 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:54.193 07:29:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:54.193 07:29:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.193 07:29:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:54.193 ************************************ 00:19:54.193 START TEST bdev_verify 00:19:54.193 ************************************ 00:19:54.193 07:29:57 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:54.193 [2024-11-20 07:29:57.981462] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:19:54.193 [2024-11-20 07:29:57.981636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77811 ] 00:19:54.453 [2024-11-20 07:29:58.155725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:54.453 [2024-11-20 07:29:58.286712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.453 [2024-11-20 07:29:58.286777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.023 Running I/O for 5 seconds... 
00:19:56.902 22656.00 IOPS, 88.50 MiB/s [2024-11-20T07:30:02.216Z] 22688.00 IOPS, 88.62 MiB/s [2024-11-20T07:30:03.156Z] 22549.33 IOPS, 88.08 MiB/s [2024-11-20T07:30:04.096Z] 22560.00 IOPS, 88.12 MiB/s [2024-11-20T07:30:04.096Z] 22528.00 IOPS, 88.00 MiB/s 00:20:00.163 Latency(us) 00:20:00.163 [2024-11-20T07:30:04.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.163 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:00.163 Verification LBA range: start 0x0 length 0xa0000 00:20:00.163 Nvme0n1 : 5.01 11225.46 43.85 0.00 0.00 11345.05 912.21 27473.61 00:20:00.163 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:00.163 Verification LBA range: start 0xa0000 length 0xa0000 00:20:00.163 Nvme0n1 : 5.01 11291.79 44.11 0.00 0.00 11275.72 1359.37 16484.16 00:20:00.163 [2024-11-20T07:30:04.096Z] =================================================================================================================== 00:20:00.163 [2024-11-20T07:30:04.096Z] Total : 22517.26 87.96 0.00 0.00 11310.29 912.21 27473.61 00:20:01.545 00:20:01.545 real 0m7.487s 00:20:01.545 user 0m13.954s 00:20:01.545 sys 0m0.264s 00:20:01.545 07:30:05 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.545 ************************************ 00:20:01.545 END TEST bdev_verify 00:20:01.545 ************************************ 00:20:01.545 07:30:05 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:01.545 07:30:05 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:01.545 07:30:05 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:01.545 07:30:05 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.545 07:30:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:01.804 ************************************ 00:20:01.804 START TEST bdev_verify_big_io 00:20:01.804 ************************************ 00:20:01.804 07:30:05 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:01.804 [2024-11-20 07:30:05.534802] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:20:01.804 [2024-11-20 07:30:05.534923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77915 ] 00:20:01.804 [2024-11-20 07:30:05.709122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:02.064 [2024-11-20 07:30:05.841454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.064 [2024-11-20 07:30:05.841492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.634 Running I/O for 5 seconds... 
00:20:04.970 1749.00 IOPS, 109.31 MiB/s [2024-11-20T07:30:09.845Z] 1854.50 IOPS, 115.91 MiB/s [2024-11-20T07:30:10.784Z] 1919.00 IOPS, 119.94 MiB/s [2024-11-20T07:30:11.722Z] 1951.25 IOPS, 121.95 MiB/s 00:20:07.789 Latency(us) 00:20:07.789 [2024-11-20T07:30:11.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.790 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:07.790 Verification LBA range: start 0x0 length 0xa000 00:20:07.790 Nvme0n1 : 5.06 985.16 61.57 0.00 0.00 127169.39 389.92 141031.18 00:20:07.790 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:07.790 Verification LBA range: start 0xa000 length 0xa000 00:20:07.790 Nvme0n1 : 5.06 981.76 61.36 0.00 0.00 127571.95 754.81 146525.90 00:20:07.790 [2024-11-20T07:30:11.723Z] =================================================================================================================== 00:20:07.790 [2024-11-20T07:30:11.723Z] Total : 1966.91 122.93 0.00 0.00 127370.31 389.92 146525.90 00:20:09.723 00:20:09.723 real 0m7.745s 00:20:09.723 user 0m14.504s 00:20:09.723 sys 0m0.248s 00:20:09.723 07:30:13 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:09.723 07:30:13 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:09.723 ************************************ 00:20:09.723 END TEST bdev_verify_big_io 00:20:09.723 ************************************ 00:20:09.723 07:30:13 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:09.723 07:30:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:09.723 07:30:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:09.723 07:30:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:09.723 ************************************ 00:20:09.723 START TEST bdev_write_zeroes 00:20:09.723 ************************************ 00:20:09.723 07:30:13 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:09.723 [2024-11-20 07:30:13.329704] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:20:09.723 [2024-11-20 07:30:13.329927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78013 ] 00:20:09.723 [2024-11-20 07:30:13.502342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.723 [2024-11-20 07:30:13.632747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.291 Running I/O for 1 seconds... 
00:20:11.666 55881.00 IOPS, 218.29 MiB/s 00:20:11.666 Latency(us) 00:20:11.666 [2024-11-20T07:30:15.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.666 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:11.666 Nvme0n1 : 1.00 55847.15 218.15 0.00 0.00 2287.02 629.60 12477.60 00:20:11.666 [2024-11-20T07:30:15.599Z] =================================================================================================================== 00:20:11.666 [2024-11-20T07:30:15.599Z] Total : 55847.15 218.15 0.00 0.00 2287.02 629.60 12477.60 00:20:12.603 00:20:12.603 real 0m3.086s 00:20:12.603 user 0m2.749s 00:20:12.603 sys 0m0.236s 00:20:12.603 07:30:16 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:12.603 07:30:16 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:12.603 ************************************ 00:20:12.603 END TEST bdev_write_zeroes 00:20:12.603 ************************************ 00:20:12.603 07:30:16 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:12.603 07:30:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:12.603 07:30:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:12.603 07:30:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:12.603 ************************************ 00:20:12.603 START TEST bdev_json_nonenclosed 00:20:12.603 ************************************ 00:20:12.603 07:30:16 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:12.603 [2024-11-20 07:30:16.485413] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:20:12.603 [2024-11-20 07:30:16.485605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78061 ] 00:20:12.862 [2024-11-20 07:30:16.661348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.121 [2024-11-20 07:30:16.783730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.121 [2024-11-20 07:30:16.783899] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:20:13.121 [2024-11-20 07:30:16.783921] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:13.121 [2024-11-20 07:30:16.783930] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:13.381 00:20:13.381 real 0m0.631s 00:20:13.381 user 0m0.398s 00:20:13.381 sys 0m0.132s 00:20:13.381 07:30:17 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.381 07:30:17 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:13.381 ************************************ 00:20:13.381 END TEST bdev_json_nonenclosed 00:20:13.381 ************************************ 00:20:13.381 07:30:17 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:13.381 07:30:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:13.381 07:30:17 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.381 07:30:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:13.381 ************************************ 00:20:13.381 START TEST bdev_json_nonarray 00:20:13.381 ************************************ 00:20:13.381 07:30:17 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:13.381 [2024-11-20 07:30:17.184364] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:20:13.381 [2024-11-20 07:30:17.184601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78092 ] 00:20:13.648 [2024-11-20 07:30:17.359816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.648 [2024-11-20 07:30:17.507944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.648 [2024-11-20 07:30:17.508107] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:13.648 [2024-11-20 07:30:17.508133] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:13.648 [2024-11-20 07:30:17.508145] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:13.921 00:20:13.921 real 0m0.694s 00:20:13.921 user 0m0.471s 00:20:13.921 sys 0m0.122s 00:20:13.921 ************************************ 00:20:13.921 END TEST bdev_json_nonarray 00:20:13.921 ************************************ 00:20:13.921 07:30:17 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.921 07:30:17 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:14.181 07:30:17 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:20:14.181 07:30:17 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:20:14.181 07:30:17 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:20:14.181 07:30:17 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:20:14.181 07:30:17 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:20:14.181 07:30:17 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:14.181 07:30:17 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:14.181 07:30:17 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:20:14.181 07:30:17 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:20:14.181 07:30:17 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:20:14.181 07:30:17 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:20:14.181 00:20:14.181 real 0m33.497s 00:20:14.181 user 0m50.193s 00:20:14.181 sys 0m3.872s 00:20:14.181 07:30:17 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.181 ************************************ 00:20:14.181 END TEST blockdev_nvme 00:20:14.181 ************************************ 00:20:14.181 07:30:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:14.181 07:30:17 -- spdk/autotest.sh@209 -- # uname -s 00:20:14.181 07:30:17 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:20:14.181 07:30:17 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:20:14.181 07:30:17 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:14.181 07:30:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.181 07:30:17 -- common/autotest_common.sh@10 -- # set +x 00:20:14.181 ************************************ 00:20:14.181 START TEST blockdev_nvme_gpt 00:20:14.181 ************************************ 00:20:14.181 07:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:20:14.181 * Looking for test storage... 
00:20:14.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:14.181 07:30:18 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:14.181 07:30:18 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:20:14.181 07:30:18 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:14.440 07:30:18 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:14.440 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:14.440 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:14.440 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:14.440 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:20:14.440 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:20:14.440 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:20:14.440 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:20:14.440 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:20:14.440 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:20:14.440 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:20:14.440 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:14.440 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:20:14.440 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:20:14.440 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:14.440 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:14.440 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:20:14.441 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:20:14.441 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:14.441 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:20:14.441 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:20:14.441 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:20:14.441 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:20:14.441 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:14.441 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:20:14.441 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:20:14.441 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:14.441 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:14.441 07:30:18 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:20:14.441 07:30:18 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:14.441 07:30:18 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:14.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.441 --rc genhtml_branch_coverage=1 00:20:14.441 --rc genhtml_function_coverage=1 00:20:14.441 --rc genhtml_legend=1 00:20:14.441 --rc geninfo_all_blocks=1 00:20:14.441 --rc geninfo_unexecuted_blocks=1 00:20:14.441 00:20:14.441 ' 00:20:14.441 07:30:18 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:14.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.441 --rc 
genhtml_branch_coverage=1 00:20:14.441 --rc genhtml_function_coverage=1 00:20:14.441 --rc genhtml_legend=1 00:20:14.441 --rc geninfo_all_blocks=1 00:20:14.441 --rc geninfo_unexecuted_blocks=1 00:20:14.441 00:20:14.441 ' 00:20:14.441 07:30:18 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:14.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.441 --rc genhtml_branch_coverage=1 00:20:14.441 --rc genhtml_function_coverage=1 00:20:14.441 --rc genhtml_legend=1 00:20:14.441 --rc geninfo_all_blocks=1 00:20:14.441 --rc geninfo_unexecuted_blocks=1 00:20:14.441 00:20:14.441 ' 00:20:14.441 07:30:18 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:14.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.441 --rc genhtml_branch_coverage=1 00:20:14.441 --rc genhtml_function_coverage=1 00:20:14.441 --rc genhtml_legend=1 00:20:14.441 --rc geninfo_all_blocks=1 00:20:14.441 --rc geninfo_unexecuted_blocks=1 00:20:14.441 00:20:14.441 ' 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=78180 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:20:14.441 07:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 78180 00:20:14.441 07:30:18 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 78180 ']' 00:20:14.441 07:30:18 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.441 07:30:18 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.441 07:30:18 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.441 07:30:18 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.441 07:30:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:14.441 [2024-11-20 07:30:18.269228] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:20:14.441 [2024-11-20 07:30:18.269417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78180 ] 00:20:14.701 [2024-11-20 07:30:18.423464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.701 [2024-11-20 07:30:18.554645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.638 07:30:19 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.638 07:30:19 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:20:15.638 07:30:19 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:20:15.638 07:30:19 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:20:15.638 07:30:19 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:16.206 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:20:16.206 Waiting for block devices as requested 00:20:16.206 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:16.206 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:20:16.206 07:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:20:16.206 07:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:20:16.206 07:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:20:16.206 07:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:16.206 07:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:20:16.206 07:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:16.206 07:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:16.206 07:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:16.206 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1') 00:20:16.206 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:20:16.206 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:20:16.206 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:20:16.206 07:30:20 blockdev_nvme_gpt -- 
bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:20:16.206 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:20:16.206 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:20:16.206 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:20:16.206 BYT; 00:20:16.206 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:20:16.206 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:20:16.206 BYT; 00:20:16.206 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:20:16.206 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:20:16.206 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:20:16.206 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:20:16.206 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:20:16.206 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:20:16.466 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:20:16.466 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:20:16.466 07:30:20 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:20:16.466 07:30:20 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:20:16.466 07:30:20 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:20:16.466 07:30:20 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:20:16.466 07:30:20 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:20:16.466 07:30:20 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:20:16.466 07:30:20 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:20:16.466 07:30:20 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:20:16.466 07:30:20 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:20:16.466 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:20:16.466 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:20:16.466 07:30:20 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:20:16.466 07:30:20 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:20:16.466 07:30:20 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:20:16.466 07:30:20 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:20:16.466 07:30:20 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:20:16.466 07:30:20 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:20:16.466 07:30:20 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:20:16.466 
07:30:20 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:20:16.466 07:30:20 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:20:16.466 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:20:16.466 07:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:20:17.846 The operation has completed successfully. 00:20:17.846 07:30:21 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:20:18.785 The operation has completed successfully. 00:20:18.785 07:30:22 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:19.045 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:20:19.045 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:19.986 [] 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@749 
-- # bdev_list=("${bdevs_name[@]}") 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1p1 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:20:19.986 07:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 78180 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 78180 ']' 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 78180 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78180 00:20:19.986 killing process with pid 78180 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78180' 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 78180 00:20:19.986 07:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 78180 00:20:22.560 07:30:26 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:22.560 07:30:26 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:20:22.560 07:30:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:22.560 07:30:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:22.560 07:30:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:22.560 ************************************ 00:20:22.560 START TEST bdev_hello_world 00:20:22.560 ************************************ 00:20:22.560 07:30:26 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:20:22.560 [2024-11-20 07:30:26.157918] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
00:20:22.560 [2024-11-20 07:30:26.158091] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78586 ] 00:20:22.560 [2024-11-20 07:30:26.345215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.560 [2024-11-20 07:30:26.462792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.130 [2024-11-20 07:30:26.921019] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:23.130 [2024-11-20 07:30:26.921071] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:20:23.130 [2024-11-20 07:30:26.921092] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:23.130 [2024-11-20 07:30:26.923612] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:23.130 [2024-11-20 07:30:26.924126] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:23.130 [2024-11-20 07:30:26.924166] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:23.130 [2024-11-20 07:30:26.924424] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:20:23.130 00:20:23.130 [2024-11-20 07:30:26.924462] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:24.512 ************************************ 00:20:24.512 END TEST bdev_hello_world 00:20:24.512 ************************************ 00:20:24.512 00:20:24.512 real 0m1.951s 00:20:24.512 user 0m1.628s 00:20:24.512 sys 0m0.223s 00:20:24.512 07:30:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:24.512 07:30:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:24.512 07:30:28 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:20:24.512 07:30:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:24.512 07:30:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:24.512 07:30:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:24.512 ************************************ 00:20:24.512 START TEST bdev_bounds 00:20:24.512 ************************************ 00:20:24.512 07:30:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:20:24.512 07:30:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=78628 00:20:24.512 07:30:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:24.512 07:30:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:24.512 07:30:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 78628' 00:20:24.512 Process bdevio pid: 78628 00:20:24.512 07:30:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 78628 00:20:24.512 07:30:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 78628 ']' 00:20:24.512 07:30:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.512 07:30:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.512 07:30:28 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.512 07:30:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.512 07:30:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:24.512 [2024-11-20 07:30:28.170247] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:20:24.512 [2024-11-20 07:30:28.170366] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78628 ] 00:20:24.512 [2024-11-20 07:30:28.326392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:24.771 [2024-11-20 07:30:28.445779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.771 [2024-11-20 07:30:28.445879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.771 [2024-11-20 07:30:28.445916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.340 07:30:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.340 07:30:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:20:25.340 07:30:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:25.340 I/O targets: 00:20:25.340 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:20:25.340 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:20:25.340 00:20:25.340 00:20:25.340 CUnit - A unit testing framework for C - Version 2.1-3 00:20:25.340 http://cunit.sourceforge.net/ 00:20:25.340 00:20:25.340 00:20:25.340 Suite: bdevio tests on: Nvme0n1p2 00:20:25.340 Test: blockdev write read block ...passed 00:20:25.340 Test: blockdev write zeroes read block ...passed 00:20:25.340 Test: blockdev write zeroes read no split ...passed 00:20:25.340 Test: blockdev write zeroes read split ...passed 00:20:25.340 Test: blockdev write zeroes read split partial ...passed 00:20:25.340 Test: blockdev reset ...[2024-11-20 07:30:29.170380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:20:25.340 [2024-11-20 07:30:29.174284] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:20:25.340 passed 00:20:25.340 Test: blockdev write read 8 blocks ...passed 00:20:25.340 Test: blockdev write read size > 128k ...passed 00:20:25.340 Test: blockdev write read invalid size ...passed 00:20:25.341 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:25.341 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:25.341 Test: blockdev write read max offset ...passed 00:20:25.341 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:25.341 Test: blockdev writev readv 8 blocks ...passed 00:20:25.341 Test: blockdev writev readv 30 x 1block ...passed 00:20:25.341 Test: blockdev writev readv block ...passed 00:20:25.341 Test: blockdev writev readv size > 128k ...passed 00:20:25.341 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:25.341 Test: blockdev comparev and writev ...[2024-11-20 07:30:29.183083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2a960d000 len:0x1000 00:20:25.341 [2024-11-20 07:30:29.183128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:20:25.341 passed 00:20:25.341 Test: blockdev nvme passthru rw ...passed 00:20:25.341 Test: blockdev nvme passthru vendor specific ...passed 00:20:25.341 Test: blockdev nvme admin passthru ...passed 00:20:25.341 Test: blockdev copy ...passed 00:20:25.341 Suite: bdevio tests on: Nvme0n1p1 00:20:25.341 Test: blockdev write read block ...passed 00:20:25.341 Test: blockdev write zeroes read block ...passed 00:20:25.341 Test: blockdev write zeroes read no split ...passed 00:20:25.341 Test: blockdev write zeroes read split ...passed 00:20:25.341 Test: blockdev write zeroes read split partial ...passed 00:20:25.341 Test: blockdev reset ...[2024-11-20 07:30:29.256747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:20:25.600 [2024-11-20 07:30:29.260520] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:20:25.600 passed 00:20:25.600 Test: blockdev write read 8 blocks ...passed 00:20:25.600 Test: blockdev write read size > 128k ...passed 00:20:25.600 Test: blockdev write read invalid size ...passed 00:20:25.600 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:25.600 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:25.600 Test: blockdev write read max offset ...passed 00:20:25.600 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:25.600 Test: blockdev writev readv 8 blocks ...passed 00:20:25.600 Test: blockdev writev readv 30 x 1block ...passed 00:20:25.600 Test: blockdev writev readv block ...passed 00:20:25.600 Test: blockdev writev readv size > 128k ...passed 00:20:25.600 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:25.600 Test: blockdev comparev and writev ...[2024-11-20 07:30:29.269036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2a9609000 len:0x1000 00:20:25.600 [2024-11-20 07:30:29.269083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:20:25.600 passed 00:20:25.600 Test: blockdev nvme passthru rw ...passed 00:20:25.600 Test: blockdev nvme passthru vendor specific ...passed 00:20:25.600 Test: blockdev nvme admin passthru ...passed 00:20:25.600 Test: blockdev copy ...passed 00:20:25.600 00:20:25.600 Run Summary: Type Total Ran Passed Failed Inactive 00:20:25.600 suites 2 2 n/a 0 0 00:20:25.600 tests 46 46 46 0 0 00:20:25.600 asserts 284 284 284 0 n/a 00:20:25.600 00:20:25.600 Elapsed time = 0.481 seconds 00:20:25.600 0 00:20:25.600 07:30:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 78628 00:20:25.600 07:30:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 78628 ']' 00:20:25.600 07:30:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 78628 00:20:25.600 07:30:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:20:25.600 07:30:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.600 07:30:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78628 00:20:25.600 07:30:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:25.600 07:30:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:25.600 killing process with pid 78628 00:20:25.600 07:30:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78628' 00:20:25.600 07:30:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 78628 00:20:25.600 07:30:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 78628 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:26.981 00:20:26.981 real 0m2.375s 00:20:26.981 user 0m5.964s 00:20:26.981 sys 0m0.338s 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:26.981 ************************************ 00:20:26.981 END TEST bdev_bounds 00:20:26.981 ************************************ 00:20:26.981 07:30:30 blockdev_nvme_gpt -- 
bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:20:26.981 07:30:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:26.981 07:30:30 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:26.981 07:30:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:26.981 ************************************ 00:20:26.981 START TEST bdev_nbd 00:20:26.981 ************************************ 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=78682 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 78682 /var/tmp/spdk-nbd.sock 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 78682 ']' 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.981 07:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:26.981 [2024-11-20 07:30:30.612921] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:20:26.981 [2024-11-20 07:30:30.613051] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.981 [2024-11-20 07:30:30.771554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.981 [2024-11-20 07:30:30.887653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.550 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.550 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:20:27.550 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:20:27.550 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:27.550 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:20:27.550 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:27.550 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:20:27.550 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:27.550 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:20:27.550 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:27.550 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:27.550 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:27.550 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:27.550 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:20:27.550 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:20:27.809 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:27.809 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:27.809 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:27.809 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:27.809 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:27.809 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:27.809 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:27.809 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:20:27.809 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:27.809 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:27.809 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:27.809 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:27.809 1+0 records in 00:20:27.809 1+0 records out 00:20:27.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414555 s, 9.9 MB/s 00:20:27.809 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:27.810 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:27.810 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:27.810 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:27.810 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:27.810 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:27.810 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:20:27.810 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:28.069 1+0 records in 00:20:28.069 1+0 records out 00:20:28.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049539 s, 8.3 MB/s 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:20:28.069 07:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:28.328 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:28.328 { 00:20:28.328 "nbd_device": "/dev/nbd0", 00:20:28.328 "bdev_name": "Nvme0n1p1" 00:20:28.328 }, 00:20:28.328 { 00:20:28.328 "nbd_device": "/dev/nbd1", 00:20:28.328 "bdev_name": "Nvme0n1p2" 00:20:28.328 } 00:20:28.328 ]' 00:20:28.328 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:28.328 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:28.328 { 00:20:28.328 "nbd_device": "/dev/nbd0", 00:20:28.328 "bdev_name": "Nvme0n1p1" 00:20:28.328 }, 00:20:28.328 { 00:20:28.328 "nbd_device": "/dev/nbd1", 00:20:28.328 "bdev_name": "Nvme0n1p2" 00:20:28.328 } 00:20:28.328 ]' 00:20:28.328 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:28.328 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:20:28.328 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:28.328 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:28.328 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:28.328 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:28.328 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:28.328 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:28.588 07:30:32 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:28.588 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:28.847 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:28.847 07:30:32 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:20:29.106 /dev/nbd0 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:29.106 1+0 records in 00:20:29.106 1+0 records out 00:20:29.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000716181 s, 5.7 MB/s 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:29.106 07:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:20:29.366 /dev/nbd1 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:29.366 1+0 records in 00:20:29.366 1+0 records out 00:20:29.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534657 s, 7.7 MB/s 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:29.366 { 00:20:29.366 "nbd_device": "/dev/nbd0", 00:20:29.366 "bdev_name": "Nvme0n1p1" 00:20:29.366 }, 00:20:29.366 { 00:20:29.366 "nbd_device": "/dev/nbd1", 00:20:29.366 "bdev_name": "Nvme0n1p2" 00:20:29.366 } 00:20:29.366 ]' 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:29.366 { 00:20:29.366 "nbd_device": "/dev/nbd0", 00:20:29.366 "bdev_name": "Nvme0n1p1" 00:20:29.366 }, 00:20:29.366 { 00:20:29.366 "nbd_device": "/dev/nbd1", 00:20:29.366 "bdev_name": "Nvme0n1p2" 00:20:29.366 } 00:20:29.366 ]' 00:20:29.366 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:29.626 /dev/nbd1' 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:29.626 /dev/nbd1' 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = 
write ']' 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:29.626 256+0 records in 00:20:29.626 256+0 records out 00:20:29.626 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122258 s, 85.8 MB/s 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:29.626 256+0 records in 00:20:29.626 256+0 records out 00:20:29.626 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0821042 s, 12.8 MB/s 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:29.626 256+0 records in 00:20:29.626 256+0 records out 00:20:29.626 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0828592 s, 12.7 MB/s 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:29.626 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:29.892 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:29.892 07:30:33 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:29.892 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:29.892 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:29.892 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:29.892 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:29.892 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:29.892 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:29.892 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:29.892 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:30.161 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:30.161 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:30.161 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:30.161 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:30.161 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:30.161 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:30.161 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:30.161 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:30.161 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:30.161 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:30.161 07:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:30.421 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:30.421 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:30.421 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:30.421 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:30.421 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:30.421 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:30.421 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:30.421 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:30.421 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:30.421 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:30.421 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:30.421 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:30.421 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:30.421 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:30.421 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:30.421 
07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:30.421 malloc_lvol_verify 00:20:30.421 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:30.681 c361245c-0e22-48e8-8a07-0c9c61fefe8f 00:20:30.681 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:30.941 3649f6a3-be35-414d-bc19-616b314ffc90 00:20:30.941 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:30.941 /dev/nbd0 00:20:30.941 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:30.941 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:30.941 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:30.941 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:30.941 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:30.941 mke2fs 1.47.0 (5-Feb-2023) 00:20:30.941 00:20:30.941 Filesystem too small for a journal 00:20:30.941 Discarding device blocks: 0/1024 done 00:20:30.941 Creating filesystem with 1024 4k blocks and 1024 inodes 00:20:30.941 00:20:30.941 Allocating group tables: 0/1 done 00:20:30.941 Writing inode tables: 0/1 done 00:20:30.941 Writing superblocks and filesystem accounting information: 0/1 done 00:20:30.941 00:20:30.941 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:30.941 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:30.941 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:30.941 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:30.941 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:30.941 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:30.941 07:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:31.201 07:30:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:31.201 07:30:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:31.201 07:30:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:31.201 07:30:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:31.201 07:30:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:31.201 07:30:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:31.201 07:30:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:31.201 07:30:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:31.201 07:30:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 78682 00:20:31.201 07:30:35 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 78682 ']' 00:20:31.201 07:30:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 78682 00:20:31.201 07:30:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:20:31.201 07:30:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.201 07:30:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78682 00:20:31.201 07:30:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:31.201 07:30:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:31.201 killing process with pid 78682 00:20:31.201 07:30:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78682' 00:20:31.201 07:30:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 78682 00:20:31.201 07:30:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 78682 00:20:32.585 07:30:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:32.585 00:20:32.585 real 0m5.784s 00:20:32.585 user 0m7.864s 00:20:32.585 sys 0m1.483s 00:20:32.585 07:30:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:32.585 07:30:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:32.585 ************************************ 00:20:32.585 END TEST bdev_nbd 00:20:32.585 ************************************ 00:20:32.585 07:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:20:32.585 07:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:20:32.585 07:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:20:32.585 skipping fio tests on NVMe due to multi-ns failures. 00:20:32.585 07:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:20:32.585 07:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:32.585 07:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:32.585 07:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:32.585 07:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:32.585 07:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:32.585 ************************************ 00:20:32.585 START TEST bdev_verify 00:20:32.585 ************************************ 00:20:32.585 07:30:36 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:32.585 [2024-11-20 07:30:36.455056] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
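Note on the bdev_nbd pass that finishes above: it verifies the two GPT partitions purely through the kernel NBD devices — each bdev is exported with nbd_start_disk, filled from a random temp file with dd, read back with cmp, and detached again with nbd_stop_disk. Below is a minimal sketch of that flow, assuming an SPDK application is already serving /var/tmp/spdk-nbd.sock and a bdev named Nvme0n1p1 exists (names and paths taken from the log); the unbounded wait loops stand in for the test's bounded 20-iteration retries.

#!/usr/bin/env bash
# Sketch of the nbd_common.sh data-verify flow seen above, not the actual test script.
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
bdev=Nvme0n1p1
nbd=/dev/nbd0
tmp=$(mktemp)

# Export the bdev over NBD and wait until the kernel lists the device.
"$rpc" -s "$sock" nbd_start_disk "$bdev" "$nbd"
until grep -q -w "$(basename "$nbd")" /proc/partitions; do sleep 0.1; done

# Push 1 MiB of random data through the device, then compare it back.
dd if=/dev/urandom of="$tmp" bs=4096 count=256
dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
cmp -b -n 1M "$tmp" "$nbd"

# Detach the export and wait for the device to drop out of /proc/partitions.
"$rpc" -s "$sock" nbd_stop_disk "$nbd"
while grep -q -w "$(basename "$nbd")" /proc/partitions; do sleep 0.1; done
rm -f "$tmp"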
00:20:32.585 [2024-11-20 07:30:36.455167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78914 ] 00:20:32.845 [2024-11-20 07:30:36.631287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:32.845 [2024-11-20 07:30:36.762079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.845 [2024-11-20 07:30:36.762118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.414 Running I/O for 5 seconds... 00:20:35.734 17344.00 IOPS, 67.75 MiB/s [2024-11-20T07:30:40.605Z] 16256.00 IOPS, 63.50 MiB/s [2024-11-20T07:30:41.544Z] 15850.67 IOPS, 61.92 MiB/s [2024-11-20T07:30:42.487Z] 16288.00 IOPS, 63.62 MiB/s [2024-11-20T07:30:42.487Z] 16064.00 IOPS, 62.75 MiB/s 00:20:38.554 Latency(us) 00:20:38.554 [2024-11-20T07:30:42.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.554 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:38.554 Verification LBA range: start 0x0 length 0x4ff80 00:20:38.554 Nvme0n1p1 : 5.02 4384.28 17.13 0.00 0.00 29108.82 5494.72 40752.52 00:20:38.554 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:38.554 Verification LBA range: start 0x4ff80 length 0x4ff80 00:20:38.554 Nvme0n1p1 : 5.03 3638.49 14.21 0.00 0.00 35081.91 4636.17 44873.56 00:20:38.554 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:38.554 Verification LBA range: start 0x0 length 0x4ff7f 00:20:38.554 Nvme0n1p2 : 5.02 4383.01 17.12 0.00 0.00 29061.53 5380.25 39378.84 00:20:38.554 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:38.554 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:20:38.554 Nvme0n1p2 : 5.03 3637.51 14.21 0.00 0.00 35023.67 4550.32 45102.50 00:20:38.554 [2024-11-20T07:30:42.487Z] =================================================================================================================== 00:20:38.554 [2024-11-20T07:30:42.487Z] Total : 16043.30 62.67 0.00 0.00 31794.28 4550.32 45102.50 00:20:39.935 00:20:39.935 real 0m7.454s 00:20:39.935 user 0m13.869s 00:20:39.935 sys 0m0.264s 00:20:39.935 07:30:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:39.935 07:30:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:39.935 ************************************ 00:20:39.935 END TEST bdev_verify 00:20:39.935 ************************************ 00:20:40.195 07:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:40.195 07:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:40.195 07:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.195 07:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:40.195 ************************************ 00:20:40.195 START TEST bdev_verify_big_io 00:20:40.195 ************************************ 00:20:40.195 07:30:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 
-q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:40.195 [2024-11-20 07:30:43.969474] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:20:40.195 [2024-11-20 07:30:43.969607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79011 ] 00:20:40.455 [2024-11-20 07:30:44.145178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:40.455 [2024-11-20 07:30:44.297396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.455 [2024-11-20 07:30:44.297426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.024 Running I/O for 5 seconds... 00:20:43.346 1725.00 IOPS, 107.81 MiB/s [2024-11-20T07:30:48.660Z] 1851.00 IOPS, 115.69 MiB/s [2024-11-20T07:30:49.599Z] 1981.00 IOPS, 123.81 MiB/s [2024-11-20T07:30:50.169Z] 2000.00 IOPS, 125.00 MiB/s [2024-11-20T07:30:50.169Z] 2005.80 IOPS, 125.36 MiB/s 00:20:46.236 Latency(us) 00:20:46.236 [2024-11-20T07:30:50.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.236 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:46.236 Verification LBA range: start 0x0 length 0x4ff8 00:20:46.236 Nvme0n1p1 : 5.17 692.64 43.29 0.00 0.00 182557.07 6467.74 178578.45 00:20:46.236 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:46.236 Verification LBA range: start 0x4ff8 length 0x4ff8 00:20:46.236 Nvme0n1p1 : 5.21 344.00 21.50 0.00 0.00 362617.32 5609.19 377304.20 00:20:46.236 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:46.236 Verification LBA range: start 0x0 length 0x4ff7 00:20:46.236 Nvme0n1p2 : 5.18 691.11 43.19 0.00 0.00 179867.07 3977.95 184988.95 00:20:46.236 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:46.236 Verification LBA range: start 0x4ff7 length 0x4ff7 00:20:46.236 Nvme0n1p2 : 5.21 342.68 21.42 0.00 0.00 353173.12 3520.06 377304.20 00:20:46.236 [2024-11-20T07:30:50.169Z] =================================================================================================================== 00:20:46.236 [2024-11-20T07:30:50.169Z] Total : 2070.44 129.40 0.00 0.00 240076.94 3520.06 377304.20 00:20:48.779 00:20:48.779 real 0m8.315s 00:20:48.779 user 0m15.459s 00:20:48.779 sys 0m0.369s 00:20:48.779 07:30:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:48.779 07:30:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:48.780 ************************************ 00:20:48.780 END TEST bdev_verify_big_io 00:20:48.780 ************************************ 00:20:48.780 07:30:52 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:48.780 07:30:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:48.780 07:30:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.780 07:30:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:48.780 ************************************ 00:20:48.780 START TEST bdev_write_zeroes 00:20:48.780 ************************************ 00:20:48.780 07:30:52 
blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:48.780 [2024-11-20 07:30:52.355075] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:20:48.780 [2024-11-20 07:30:52.355203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79116 ] 00:20:48.780 [2024-11-20 07:30:52.531741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.780 [2024-11-20 07:30:52.684457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.717 Running I/O for 1 seconds... 00:20:50.653 54623.00 IOPS, 213.37 MiB/s 00:20:50.653 Latency(us) 00:20:50.653 [2024-11-20T07:30:54.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.653 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:50.653 Nvme0n1p1 : 1.01 27376.91 106.94 0.00 0.00 4666.22 2647.20 17628.90 00:20:50.653 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:50.653 Nvme0n1p2 : 1.01 27191.07 106.22 0.00 0.00 4692.20 2461.18 24268.35 00:20:50.653 [2024-11-20T07:30:54.586Z] =================================================================================================================== 00:20:50.653 [2024-11-20T07:30:54.586Z] Total : 54567.97 213.16 0.00 0.00 4679.17 2461.18 24268.35 00:20:51.593 00:20:51.593 real 0m3.102s 00:20:51.593 user 0m2.653s 00:20:51.593 sys 0m0.348s 00:20:51.593 07:30:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.593 07:30:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:51.593 ************************************ 00:20:51.593 END TEST bdev_write_zeroes 00:20:51.593 ************************************ 00:20:51.593 07:30:55 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:51.593 07:30:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:51.593 07:30:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.593 07:30:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:51.593 ************************************ 00:20:51.593 START TEST bdev_json_nonenclosed 00:20:51.593 ************************************ 00:20:51.593 07:30:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:51.593 [2024-11-20 07:30:55.510103] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
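Note on the three bdevperf passes recorded above (bdev_verify, bdev_verify_big_io, bdev_write_zeroes): they reuse a single invocation pattern and differ only in I/O size, workload and runtime. The condensed sketch below uses the exact paths and flag values from the log; -C and -m 0x3 are copied verbatim from the logged commands rather than interpreted here.

#!/usr/bin/env bash
# Illustrative only: replays the three bdevperf workloads shown above.
set -euo pipefail
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json   # bdev layout consumed via --json

# -q queue depth, -o I/O size in bytes, -w workload type, -t run time in seconds.
"$bdevperf" --json "$conf" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3
"$bdevperf" --json "$conf" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3
"$bdevperf" --json "$conf" -q 128 -o 4096  -w write_zeroes -t 1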
00:20:51.593 [2024-11-20 07:30:55.510220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79169 ] 00:20:51.853 [2024-11-20 07:30:55.671480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.113 [2024-11-20 07:30:55.796483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.113 [2024-11-20 07:30:55.796594] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:52.113 [2024-11-20 07:30:55.796611] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:52.113 [2024-11-20 07:30:55.796622] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:52.373 00:20:52.373 real 0m0.600s 00:20:52.373 user 0m0.395s 00:20:52.373 sys 0m0.105s 00:20:52.373 07:30:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.373 07:30:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:52.373 ************************************ 00:20:52.373 END TEST bdev_json_nonenclosed 00:20:52.373 ************************************ 00:20:52.373 07:30:56 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:52.373 07:30:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:52.373 07:30:56 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.373 07:30:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:52.373 ************************************ 00:20:52.373 START TEST bdev_json_nonarray 00:20:52.373 ************************************ 00:20:52.373 07:30:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:52.373 [2024-11-20 07:30:56.174002] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:20:52.373 [2024-11-20 07:30:56.174133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79195 ] 00:20:52.632 [2024-11-20 07:30:56.347942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.632 [2024-11-20 07:30:56.477142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.632 [2024-11-20 07:30:56.477256] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
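Note on the bdev_json_nonenclosed and bdev_json_nonarray cases above: they are negative tests, pointing bdevperf at deliberately malformed --json files, and the json_config.c errors printed in the log are the expected outcome. The fixture contents are not shown in the log, so the shapes below are only illustrative of what triggers each message, with a well-formed form for contrast.

# Hypothetical contents; the real nonenclosed.json/nonarray.json under test/bdev/ may differ.
printf '%s\n' '"subsystems": []'     > /tmp/nonenclosed.json   # -> "not enclosed in {}."
printf '%s\n' '{ "subsystems": {} }' > /tmp/nonarray.json      # -> "'subsystems' should be an array."
printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }' > /tmp/ok.json   # accepted shape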
00:20:52.632 [2024-11-20 07:30:56.477276] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:52.633 [2024-11-20 07:30:56.477286] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:52.892 00:20:52.892 real 0m0.606s 00:20:52.892 user 0m0.409s 00:20:52.892 sys 0m0.096s 00:20:52.892 07:30:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.892 07:30:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:52.892 ************************************ 00:20:52.892 END TEST bdev_json_nonarray 00:20:52.892 ************************************ 00:20:52.893 07:30:56 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:20:52.893 07:30:56 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:20:52.893 07:30:56 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:20:52.893 07:30:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:52.893 07:30:56 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.893 07:30:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:52.893 ************************************ 00:20:52.893 START TEST bdev_gpt_uuid 00:20:52.893 ************************************ 00:20:52.893 07:30:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:20:52.893 07:30:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:20:52.893 07:30:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:20:52.893 07:30:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=79226 00:20:52.893 07:30:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:52.893 07:30:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:52.893 07:30:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 79226 00:20:52.893 07:30:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 79226 ']' 00:20:52.893 07:30:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.893 07:30:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.893 07:30:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.893 07:30:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.893 07:30:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:20:53.152 [2024-11-20 07:30:56.853618] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
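Note on the bdev_gpt_uuid test starting here: it uses the standard target bring-up — spdk_tgt is launched in the background, the harness waits for its RPC socket, and every further step goes through rpc.py. Below is a simplified stand-in for that sequence, using the paths visible in the log; waitforlisten is a helper in common/autotest_common.sh, and the polling loop only approximates it.

#!/usr/bin/env bash
# Simplified bring-up for an RPC-driven test against spdk_tgt.
set -euo pipefail
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

"$spdk_tgt" &            # start the target in the background
tgt_pid=$!
trap 'kill "$tgt_pid" 2>/dev/null' EXIT

# Poll until the RPC socket answers (the real waitforlisten also verifies the pid).
until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

# Load the bdev configuration and block until bdev examination (GPT probing) completes.
"$rpc" -s "$sock" load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
"$rpc" -s "$sock" bdev_wait_for_examine

# The GUID checks then reduce to jq lookups on bdev_get_bdevs output, e.g.:
"$rpc" -s "$sock" bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
    | jq -r '.[0].driver_specific.gpt.unique_partition_guid'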
00:20:53.152 [2024-11-20 07:30:56.853777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79226 ] 00:20:53.152 [2024-11-20 07:30:57.008378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.412 [2024-11-20 07:30:57.124632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.352 07:30:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.352 07:30:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:20:54.352 07:30:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:54.352 07:30:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.352 07:30:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:20:54.352 Some configs were skipped because the RPC state that can call them passed over. 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:20:54.352 { 00:20:54.352 "name": "Nvme0n1p1", 00:20:54.352 "aliases": [ 00:20:54.352 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:20:54.352 ], 00:20:54.352 "product_name": "GPT Disk", 00:20:54.352 "block_size": 4096, 00:20:54.352 "num_blocks": 655104, 00:20:54.352 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:20:54.352 "assigned_rate_limits": { 00:20:54.352 "rw_ios_per_sec": 0, 00:20:54.352 "rw_mbytes_per_sec": 0, 00:20:54.352 "r_mbytes_per_sec": 0, 00:20:54.352 "w_mbytes_per_sec": 0 00:20:54.352 }, 00:20:54.352 "claimed": false, 00:20:54.352 "zoned": false, 00:20:54.352 "supported_io_types": { 00:20:54.352 "read": true, 00:20:54.352 "write": true, 00:20:54.352 "unmap": true, 00:20:54.352 "flush": true, 00:20:54.352 "reset": true, 00:20:54.352 "nvme_admin": false, 00:20:54.352 "nvme_io": false, 00:20:54.352 "nvme_io_md": false, 00:20:54.352 "write_zeroes": true, 00:20:54.352 "zcopy": false, 00:20:54.352 "get_zone_info": false, 00:20:54.352 "zone_management": false, 00:20:54.352 "zone_append": false, 00:20:54.352 "compare": true, 00:20:54.352 "compare_and_write": false, 00:20:54.352 "abort": true, 00:20:54.352 "seek_hole": false, 00:20:54.352 "seek_data": false, 00:20:54.352 "copy": true, 00:20:54.352 "nvme_iov_md": false 00:20:54.352 }, 00:20:54.352 "driver_specific": { 
00:20:54.352 "gpt": { 00:20:54.352 "base_bdev": "Nvme0n1", 00:20:54.352 "offset_blocks": 256, 00:20:54.352 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:20:54.352 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:20:54.352 "partition_name": "SPDK_TEST_first" 00:20:54.352 } 00:20:54.352 } 00:20:54.352 } 00:20:54.352 ]' 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.352 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:20:54.352 { 00:20:54.352 "name": "Nvme0n1p2", 00:20:54.352 "aliases": [ 00:20:54.352 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:20:54.352 ], 00:20:54.352 "product_name": "GPT Disk", 00:20:54.352 "block_size": 4096, 00:20:54.352 "num_blocks": 655103, 00:20:54.352 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:20:54.352 "assigned_rate_limits": { 00:20:54.352 "rw_ios_per_sec": 0, 00:20:54.352 "rw_mbytes_per_sec": 0, 00:20:54.352 "r_mbytes_per_sec": 0, 00:20:54.352 "w_mbytes_per_sec": 0 00:20:54.352 }, 00:20:54.352 "claimed": false, 00:20:54.352 "zoned": false, 00:20:54.352 "supported_io_types": { 00:20:54.352 "read": true, 00:20:54.352 "write": true, 00:20:54.352 "unmap": true, 00:20:54.352 "flush": true, 00:20:54.352 "reset": true, 00:20:54.352 "nvme_admin": false, 00:20:54.353 "nvme_io": false, 00:20:54.353 "nvme_io_md": false, 00:20:54.353 "write_zeroes": true, 00:20:54.353 "zcopy": false, 00:20:54.353 "get_zone_info": false, 00:20:54.353 "zone_management": false, 00:20:54.353 "zone_append": false, 00:20:54.353 "compare": true, 00:20:54.353 "compare_and_write": false, 00:20:54.353 "abort": true, 00:20:54.353 "seek_hole": false, 00:20:54.353 "seek_data": false, 00:20:54.353 "copy": true, 00:20:54.353 "nvme_iov_md": false 00:20:54.353 }, 00:20:54.353 "driver_specific": { 00:20:54.353 "gpt": { 00:20:54.353 "base_bdev": "Nvme0n1", 00:20:54.353 "offset_blocks": 655360, 00:20:54.353 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:20:54.353 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:20:54.353 "partition_name": "SPDK_TEST_second" 00:20:54.353 } 00:20:54.353 } 00:20:54.353 } 00:20:54.353 ]' 00:20:54.353 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:20:54.353 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:20:54.353 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:20:54.353 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:20:54.353 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:20:54.353 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:20:54.353 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 79226 00:20:54.353 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 79226 ']' 00:20:54.353 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 79226 00:20:54.353 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:20:54.353 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.353 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79226 00:20:54.613 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:54.613 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:54.613 killing process with pid 79226 00:20:54.613 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79226' 00:20:54.613 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 79226 00:20:54.613 07:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 79226 00:20:56.545 00:20:56.545 real 0m3.670s 00:20:56.545 user 0m3.570s 00:20:56.545 sys 0m0.517s 00:20:56.545 07:31:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:56.545 07:31:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:20:56.545 ************************************ 00:20:56.545 END TEST bdev_gpt_uuid 00:20:56.545 ************************************ 00:20:56.805 07:31:00 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:20:56.805 07:31:00 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:20:56.805 07:31:00 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:20:56.805 07:31:00 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:56.805 07:31:00 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:56.805 07:31:00 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:20:56.805 07:31:00 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:20:56.805 07:31:00 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:20:56.805 07:31:00 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:57.066 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:20:57.066 Waiting for block devices as requested 00:20:57.326 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:57.326 07:31:01 
blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:20:57.326 07:31:01 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:20:57.586 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:20:57.586 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:20:57.586 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:20:57.586 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:20:57.586 07:31:01 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:20:57.586 00:20:57.586 real 0m43.423s 00:20:57.586 user 1m0.539s 00:20:57.586 sys 0m6.896s 00:20:57.586 07:31:01 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.586 07:31:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:57.586 ************************************ 00:20:57.586 END TEST blockdev_nvme_gpt 00:20:57.586 ************************************ 00:20:57.586 07:31:01 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:20:57.586 07:31:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:57.586 07:31:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.586 07:31:01 -- common/autotest_common.sh@10 -- # set +x 00:20:57.586 ************************************ 00:20:57.586 START TEST nvme 00:20:57.586 ************************************ 00:20:57.586 07:31:01 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:20:57.846 * Looking for test storage... 00:20:57.846 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:57.846 07:31:01 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:57.846 07:31:01 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:20:57.846 07:31:01 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:57.846 07:31:01 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:57.846 07:31:01 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:57.846 07:31:01 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:57.846 07:31:01 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:57.846 07:31:01 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:20:57.846 07:31:01 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:20:57.846 07:31:01 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:20:57.846 07:31:01 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:20:57.846 07:31:01 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:20:57.846 07:31:01 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:20:57.846 07:31:01 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:20:57.846 07:31:01 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:57.846 07:31:01 nvme -- scripts/common.sh@344 -- # case "$op" in 00:20:57.846 07:31:01 nvme -- scripts/common.sh@345 -- # : 1 00:20:57.846 07:31:01 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:57.846 07:31:01 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:57.846 07:31:01 nvme -- scripts/common.sh@365 -- # decimal 1 00:20:57.846 07:31:01 nvme -- scripts/common.sh@353 -- # local d=1 00:20:57.846 07:31:01 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:57.846 07:31:01 nvme -- scripts/common.sh@355 -- # echo 1 00:20:57.846 07:31:01 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:20:57.846 07:31:01 nvme -- scripts/common.sh@366 -- # decimal 2 00:20:57.846 07:31:01 nvme -- scripts/common.sh@353 -- # local d=2 00:20:57.846 07:31:01 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:57.846 07:31:01 nvme -- scripts/common.sh@355 -- # echo 2 00:20:57.846 07:31:01 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:20:57.846 07:31:01 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:57.846 07:31:01 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:57.846 07:31:01 nvme -- scripts/common.sh@368 -- # return 0 00:20:57.846 07:31:01 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:57.846 07:31:01 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:57.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.846 --rc genhtml_branch_coverage=1 00:20:57.846 --rc genhtml_function_coverage=1 00:20:57.846 --rc genhtml_legend=1 00:20:57.846 --rc geninfo_all_blocks=1 00:20:57.846 --rc geninfo_unexecuted_blocks=1 00:20:57.846 00:20:57.846 ' 00:20:57.846 07:31:01 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:57.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.846 --rc genhtml_branch_coverage=1 00:20:57.846 --rc genhtml_function_coverage=1 00:20:57.846 --rc genhtml_legend=1 00:20:57.846 --rc geninfo_all_blocks=1 00:20:57.846 --rc geninfo_unexecuted_blocks=1 00:20:57.846 00:20:57.846 ' 00:20:57.846 07:31:01 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:57.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.846 --rc genhtml_branch_coverage=1 00:20:57.846 --rc genhtml_function_coverage=1 00:20:57.846 --rc genhtml_legend=1 00:20:57.846 --rc geninfo_all_blocks=1 00:20:57.846 --rc geninfo_unexecuted_blocks=1 00:20:57.846 00:20:57.846 ' 00:20:57.846 07:31:01 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:57.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.846 --rc genhtml_branch_coverage=1 00:20:57.846 --rc genhtml_function_coverage=1 00:20:57.846 --rc genhtml_legend=1 00:20:57.846 --rc geninfo_all_blocks=1 00:20:57.846 --rc geninfo_unexecuted_blocks=1 00:20:57.846 00:20:57.846 ' 00:20:57.846 07:31:01 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:58.416 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:20:58.416 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:59.356 07:31:03 nvme -- nvme/nvme.sh@79 -- # uname 00:20:59.356 07:31:03 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:20:59.356 07:31:03 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:20:59.357 07:31:03 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:20:59.357 07:31:03 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:20:59.357 07:31:03 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:20:59.357 07:31:03 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:20:59.357 07:31:03 nvme -- common/autotest_common.sh@1075 -- 
# stubpid=79611 00:20:59.357 07:31:03 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:20:59.357 Waiting for stub to ready for secondary processes... 00:20:59.357 07:31:03 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:20:59.357 07:31:03 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:20:59.357 07:31:03 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/79611 ]] 00:20:59.357 07:31:03 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:20:59.357 [2024-11-20 07:31:03.098314] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:20:59.357 [2024-11-20 07:31:03.098428] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:21:00.296 07:31:04 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:21:00.296 07:31:04 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/79611 ]] 00:21:00.296 07:31:04 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:21:00.296 [2024-11-20 07:31:04.099988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:00.556 [2024-11-20 07:31:04.217211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.556 [2024-11-20 07:31:04.217354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.556 [2024-11-20 07:31:04.217400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.556 [2024-11-20 07:31:04.226293] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:21:00.556 [2024-11-20 07:31:04.226326] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:21:00.556 [2024-11-20 07:31:04.238749] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:21:00.556 [2024-11-20 07:31:04.239147] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:21:01.496 done. 00:21:01.496 07:31:05 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:21:01.496 07:31:05 nvme -- common/autotest_common.sh@1082 -- # echo done. 
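The xtrace above is the stub start-and-wait helper from common/autotest_common.sh at work: the stub primary process is launched, its PID is recorded, and the script then polls until /var/run/spdk_stub0 appears (or the stub dies). A condensed, illustrative rendering of that loop, not the exact helper source:

    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    # Keep polling while the ready-file is absent and the stub process is still alive.
    while [ ! -e /var/run/spdk_stub0 ] && [ -e /proc/$stubpid ]; do
        sleep 1s
    done
    [ -e /var/run/spdk_stub0 ] && echo done.

Once /var/run/spdk_stub0 exists, the individual nvme tests below can start as secondary processes against the hugepage memory the stub has pinned, which is why the harness waits here before running anything else.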
00:21:01.496 07:31:05 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:21:01.496 07:31:05 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:21:01.496 07:31:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:01.496 07:31:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:01.496 ************************************ 00:21:01.496 START TEST nvme_reset 00:21:01.496 ************************************ 00:21:01.496 07:31:05 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:21:01.496 Initializing NVMe Controllers 00:21:01.496 Skipping QEMU NVMe SSD at 0000:00:10.0 00:21:01.496 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:21:01.496 00:21:01.496 real 0m0.261s 00:21:01.496 user 0m0.087s 00:21:01.496 sys 0m0.133s 00:21:01.496 07:31:05 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:01.496 07:31:05 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:21:01.496 ************************************ 00:21:01.496 END TEST nvme_reset 00:21:01.496 ************************************ 00:21:01.496 07:31:05 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:21:01.496 07:31:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:01.496 07:31:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:01.496 07:31:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:01.496 ************************************ 00:21:01.496 START TEST nvme_identify 00:21:01.496 ************************************ 00:21:01.496 07:31:05 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:21:01.496 07:31:05 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:21:01.496 07:31:05 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:21:01.496 07:31:05 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:21:01.756 07:31:05 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:21:01.756 07:31:05 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:21:01.756 07:31:05 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:21:01.756 07:31:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:01.756 07:31:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:01.756 07:31:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:21:01.756 07:31:05 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:21:01.756 07:31:05 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 00:21:01.756 07:31:05 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:21:02.015 ===================================================== 00:21:02.015 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:02.015 ===================================================== 00:21:02.015 Controller Capabilities/Features 00:21:02.015 ================================ 00:21:02.015 Vendor ID: 1b36 00:21:02.015 Subsystem Vendor ID: 1af4 00:21:02.015 Serial Number: 12340 00:21:02.015 Model Number: QEMU NVMe Ctrl 00:21:02.015 Firmware Version: 8.0.0 00:21:02.015 Recommended Arb Burst: 6 
00:21:02.015 IEEE OUI Identifier: 00 54 52 00:21:02.015 Multi-path I/O 00:21:02.015 May have multiple subsystem ports: No 00:21:02.015 May have multiple controllers: No 00:21:02.015 Associated with SR-IOV VF: No 00:21:02.015 Max Data Transfer Size: 524288 00:21:02.015 Max Number of Namespaces: 256 00:21:02.015 Max Number of I/O Queues: 64 00:21:02.015 NVMe Specification Version (VS): 1.4 00:21:02.015 NVMe Specification Version (Identify): 1.4 00:21:02.015 Maximum Queue Entries: 2048 00:21:02.015 Contiguous Queues Required: Yes 00:21:02.015 Arbitration Mechanisms Supported 00:21:02.015 Weighted Round Robin: Not Supported 00:21:02.015 Vendor Specific: Not Supported 00:21:02.015 Reset Timeout: 7500 ms 00:21:02.015 Doorbell Stride: 4 bytes 00:21:02.015 NVM Subsystem Reset: Not Supported 00:21:02.015 Command Sets Supported 00:21:02.015 NVM Command Set: Supported 00:21:02.015 Boot Partition: Not Supported 00:21:02.015 Memory Page Size Minimum: 4096 bytes 00:21:02.015 Memory Page Size Maximum: 65536 bytes 00:21:02.015 Persistent Memory Region: Not Supported 00:21:02.015 Optional Asynchronous Events Supported 00:21:02.015 Namespace Attribute Notices: Supported 00:21:02.015 Firmware Activation Notices: Not Supported 00:21:02.015 ANA Change Notices: Not Supported 00:21:02.015 PLE Aggregate Log Change Notices: Not Supported 00:21:02.015 LBA Status Info Alert Notices: Not Supported 00:21:02.015 EGE Aggregate Log Change Notices: Not Supported 00:21:02.015 Normal NVM Subsystem Shutdown event: Not Supported 00:21:02.015 Zone Descriptor Change Notices: Not Supported 00:21:02.015 Discovery Log Change Notices: Not Supported 00:21:02.015 Controller Attributes 00:21:02.015 128-bit Host Identifier: Not Supported 00:21:02.015 Non-Operational Permissive Mode: Not Supported 00:21:02.016 NVM Sets: Not Supported 00:21:02.016 Read Recovery Levels: Not Supported 00:21:02.016 Endurance Groups: Not Supported 00:21:02.016 Predictable Latency Mode: Not Supported 00:21:02.016 Traffic Based Keep ALive: Not Supported 00:21:02.016 Namespace Granularity: Not Supported 00:21:02.016 SQ Associations: Not Supported 00:21:02.016 UUID List: Not Supported 00:21:02.016 Multi-Domain Subsystem: Not Supported 00:21:02.016 Fixed Capacity Management: Not Supported 00:21:02.016 Variable Capacity Management: Not Supported 00:21:02.016 Delete Endurance Group: Not Supported 00:21:02.016 Delete NVM Set: Not Supported 00:21:02.016 Extended LBA Formats Supported: Supported 00:21:02.016 Flexible Data Placement Supported: Not Supported 00:21:02.016 00:21:02.016 Controller Memory Buffer Support 00:21:02.016 ================================ 00:21:02.016 Supported: No 00:21:02.016 00:21:02.016 Persistent Memory Region Support 00:21:02.016 ================================ 00:21:02.016 Supported: No 00:21:02.016 00:21:02.016 Admin Command Set Attributes 00:21:02.016 ============================ 00:21:02.016 Security Send/Receive: Not Supported 00:21:02.016 Format NVM: Supported 00:21:02.016 Firmware Activate/Download: Not Supported 00:21:02.016 Namespace Management: Supported 00:21:02.016 Device Self-Test: Not Supported 00:21:02.016 Directives: Supported 00:21:02.016 NVMe-MI: Not Supported 00:21:02.016 Virtualization Management: Not Supported 00:21:02.016 Doorbell Buffer Config: Supported 00:21:02.016 Get LBA Status Capability: Not Supported 00:21:02.016 Command & Feature Lockdown Capability: Not Supported 00:21:02.016 Abort Command Limit: 4 00:21:02.016 Async Event Request Limit: 4 00:21:02.016 Number of Firmware Slots: N/A 00:21:02.016 Firmware Slot 
1 Read-Only: N/A 00:21:02.016 Firmware Activation Without Reset: N/A 00:21:02.016 Multiple Update Detection Support: N/A 00:21:02.016 Firmware Update Granularity: No Information Provided 00:21:02.016 Per-Namespace SMART Log: Yes 00:21:02.016 Asymmetric Namespace Access Log Page: Not Supported 00:21:02.016 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:21:02.016 Command Effects Log Page: Supported 00:21:02.016 Get Log Page Extended Data: Supported 00:21:02.016 Telemetry Log Pages: Not Supported 00:21:02.016 Persistent Event Log Pages: Not Supported 00:21:02.016 Supported Log Pages Log Page: May Support 00:21:02.016 Commands Supported & Effects Log Page: Not Supported 00:21:02.016 Feature Identifiers & Effects Log Page:May Support 00:21:02.016 NVMe-MI Commands & Effects Log Page: May Support 00:21:02.016 Data Area 4 for Telemetry Log: Not Supported 00:21:02.016 Error Log Page Entries Supported: 1 00:21:02.016 Keep Alive: Not Supported 00:21:02.016 00:21:02.016 NVM Command Set Attributes 00:21:02.016 ========================== 00:21:02.016 Submission Queue Entry Size 00:21:02.016 Max: 64 00:21:02.016 Min: 64 00:21:02.016 Completion Queue Entry Size 00:21:02.016 Max: 16 00:21:02.016 Min: 16 00:21:02.016 Number of Namespaces: 256 00:21:02.016 Compare Command: Supported 00:21:02.016 Write Uncorrectable Command: Not Supported 00:21:02.016 Dataset Management Command: Supported 00:21:02.016 Write Zeroes Command: Supported 00:21:02.016 Set Features Save Field: Supported 00:21:02.016 Reservations: Not Supported 00:21:02.016 Timestamp: Supported 00:21:02.016 Copy: Supported 00:21:02.016 Volatile Write Cache: Present 00:21:02.016 Atomic Write Unit (Normal): 1 00:21:02.016 Atomic Write Unit (PFail): 1 00:21:02.016 Atomic Compare & Write Unit: 1 00:21:02.016 Fused Compare & Write: Not Supported 00:21:02.016 Scatter-Gather List 00:21:02.016 SGL Command Set: Supported 00:21:02.016 SGL Keyed: Not Supported 00:21:02.016 SGL Bit Bucket Descriptor: Not Supported 00:21:02.016 SGL Metadata Pointer: Not Supported 00:21:02.016 Oversized SGL: Not Supported 00:21:02.016 SGL Metadata Address: Not Supported 00:21:02.016 SGL Offset: Not Supported 00:21:02.016 Transport SGL Data Block: Not Supported 00:21:02.016 Replay Protected Memory Block: Not Supported 00:21:02.016 00:21:02.016 Firmware Slot Information 00:21:02.016 ========================= 00:21:02.016 Active slot: 1 00:21:02.016 Slot 1 Firmware Revision: 1.0 00:21:02.016 00:21:02.016 00:21:02.016 Commands Supported and Effects 00:21:02.016 ============================== 00:21:02.016 Admin Commands 00:21:02.016 -------------- 00:21:02.016 Delete I/O Submission Queue (00h): Supported 00:21:02.016 Create I/O Submission Queue (01h): Supported 00:21:02.016 Get Log Page (02h): Supported 00:21:02.016 Delete I/O Completion Queue (04h): Supported 00:21:02.016 Create I/O Completion Queue (05h): Supported 00:21:02.016 Identify (06h): Supported 00:21:02.016 Abort (08h): Supported 00:21:02.016 Set Features (09h): Supported 00:21:02.016 Get Features (0Ah): Supported 00:21:02.016 Asynchronous Event Request (0Ch): Supported 00:21:02.016 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:02.016 Directive Send (19h): Supported 00:21:02.016 Directive Receive (1Ah): Supported 00:21:02.016 Virtualization Management (1Ch): Supported 00:21:02.016 Doorbell Buffer Config (7Ch): Supported 00:21:02.016 Format NVM (80h): Supported LBA-Change 00:21:02.016 I/O Commands 00:21:02.016 ------------ 00:21:02.016 Flush (00h): Supported LBA-Change 00:21:02.016 Write (01h): Supported 
LBA-Change 00:21:02.016 Read (02h): Supported 00:21:02.016 Compare (05h): Supported 00:21:02.016 Write Zeroes (08h): Supported LBA-Change 00:21:02.016 Dataset Management (09h): Supported LBA-Change 00:21:02.016 Unknown (0Ch): Supported 00:21:02.016 Unknown (12h): Supported 00:21:02.016 Copy (19h): Supported LBA-Change 00:21:02.016 Unknown (1Dh): Supported LBA-Change 00:21:02.016 00:21:02.016 Error Log 00:21:02.016 ========= 00:21:02.016 00:21:02.016 Arbitration 00:21:02.016 =========== 00:21:02.016 Arbitration Burst: no limit 00:21:02.016 00:21:02.016 Power Management 00:21:02.016 ================ 00:21:02.016 Number of Power States: 1 00:21:02.016 Current Power State: Power State #0 00:21:02.016 Power State #0: 00:21:02.016 Max Power: 25.00 W 00:21:02.016 Non-Operational State: Operational 00:21:02.016 Entry Latency: 16 microseconds 00:21:02.016 Exit Latency: 4 microseconds 00:21:02.016 Relative Read Throughput: 0 00:21:02.016 Relative Read Latency: 0 00:21:02.016 Relative Write Throughput: 0 00:21:02.016 Relative Write Latency: 0 00:21:02.016 Idle Power[2024-11-20 07:31:05.726571] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 79645 terminated unexpected 00:21:02.016 : Not Reported 00:21:02.016 Active Power: Not Reported 00:21:02.016 Non-Operational Permissive Mode: Not Supported 00:21:02.016 00:21:02.016 Health Information 00:21:02.016 ================== 00:21:02.016 Critical Warnings: 00:21:02.016 Available Spare Space: OK 00:21:02.016 Temperature: OK 00:21:02.016 Device Reliability: OK 00:21:02.016 Read Only: No 00:21:02.016 Volatile Memory Backup: OK 00:21:02.016 Current Temperature: 323 Kelvin (50 Celsius) 00:21:02.016 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:02.016 Available Spare: 0% 00:21:02.016 Available Spare Threshold: 0% 00:21:02.016 Life Percentage Used: 0% 00:21:02.016 Data Units Read: 4635 00:21:02.016 Data Units Written: 4365 00:21:02.016 Host Read Commands: 217038 00:21:02.016 Host Write Commands: 231470 00:21:02.016 Controller Busy Time: 0 minutes 00:21:02.016 Power Cycles: 0 00:21:02.016 Power On Hours: 0 hours 00:21:02.016 Unsafe Shutdowns: 0 00:21:02.016 Unrecoverable Media Errors: 0 00:21:02.016 Lifetime Error Log Entries: 0 00:21:02.016 Warning Temperature Time: 0 minutes 00:21:02.016 Critical Temperature Time: 0 minutes 00:21:02.016 00:21:02.016 Number of Queues 00:21:02.016 ================ 00:21:02.016 Number of I/O Submission Queues: 64 00:21:02.016 Number of I/O Completion Queues: 64 00:21:02.016 00:21:02.016 ZNS Specific Controller Data 00:21:02.016 ============================ 00:21:02.016 Zone Append Size Limit: 0 00:21:02.016 00:21:02.016 00:21:02.016 Active Namespaces 00:21:02.016 ================= 00:21:02.016 Namespace ID:1 00:21:02.016 Error Recovery Timeout: Unlimited 00:21:02.016 Command Set Identifier: NVM (00h) 00:21:02.016 Deallocate: Supported 00:21:02.016 Deallocated/Unwritten Error: Supported 00:21:02.016 Deallocated Read Value: All 0x00 00:21:02.016 Deallocate in Write Zeroes: Not Supported 00:21:02.016 Deallocated Guard Field: 0xFFFF 00:21:02.016 Flush: Supported 00:21:02.016 Reservation: Not Supported 00:21:02.016 Namespace Sharing Capabilities: Private 00:21:02.016 Size (in LBAs): 1310720 (5GiB) 00:21:02.016 Capacity (in LBAs): 1310720 (5GiB) 00:21:02.016 Utilization (in LBAs): 1310720 (5GiB) 00:21:02.016 Thin Provisioning: Not Supported 00:21:02.016 Per-NS Atomic Units: No 00:21:02.016 Maximum Single Source Range Length: 128 00:21:02.016 Maximum Copy Length: 128 00:21:02.016 Maximum Source 
Range Count: 128 00:21:02.016 NGUID/EUI64 Never Reused: No 00:21:02.016 Namespace Write Protected: No 00:21:02.016 Number of LBA Formats: 8 00:21:02.017 Current LBA Format: LBA Format #04 00:21:02.017 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:02.017 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:02.017 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:02.017 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:02.017 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:02.017 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:02.017 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:02.017 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:02.017 00:21:02.017 NVM Specific Namespace Data 00:21:02.017 =========================== 00:21:02.017 Logical Block Storage Tag Mask: 0 00:21:02.017 Protection Information Capabilities: 00:21:02.017 16b Guard Protection Information Storage Tag Support: No 00:21:02.017 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:21:02.017 Storage Tag Check Read Support: No 00:21:02.017 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:02.017 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:02.017 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:02.017 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:02.017 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:02.017 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:02.017 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:02.017 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:02.017 07:31:05 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:21:02.017 07:31:05 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:21:02.277 ===================================================== 00:21:02.277 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:02.277 ===================================================== 00:21:02.277 Controller Capabilities/Features 00:21:02.277 ================================ 00:21:02.277 Vendor ID: 1b36 00:21:02.277 Subsystem Vendor ID: 1af4 00:21:02.277 Serial Number: 12340 00:21:02.277 Model Number: QEMU NVMe Ctrl 00:21:02.277 Firmware Version: 8.0.0 00:21:02.277 Recommended Arb Burst: 6 00:21:02.277 IEEE OUI Identifier: 00 54 52 00:21:02.277 Multi-path I/O 00:21:02.277 May have multiple subsystem ports: No 00:21:02.277 May have multiple controllers: No 00:21:02.277 Associated with SR-IOV VF: No 00:21:02.277 Max Data Transfer Size: 524288 00:21:02.277 Max Number of Namespaces: 256 00:21:02.277 Max Number of I/O Queues: 64 00:21:02.277 NVMe Specification Version (VS): 1.4 00:21:02.277 NVMe Specification Version (Identify): 1.4 00:21:02.277 Maximum Queue Entries: 2048 00:21:02.277 Contiguous Queues Required: Yes 00:21:02.277 Arbitration Mechanisms Supported 00:21:02.277 Weighted Round Robin: Not Supported 00:21:02.277 Vendor Specific: Not Supported 00:21:02.277 Reset Timeout: 7500 ms 00:21:02.277 Doorbell Stride: 4 bytes 00:21:02.277 NVM Subsystem Reset: Not Supported 00:21:02.277 Command Sets Supported 00:21:02.277 NVM Command Set: Supported 00:21:02.277 
Boot Partition: Not Supported 00:21:02.277 Memory Page Size Minimum: 4096 bytes 00:21:02.277 Memory Page Size Maximum: 65536 bytes 00:21:02.277 Persistent Memory Region: Not Supported 00:21:02.277 Optional Asynchronous Events Supported 00:21:02.277 Namespace Attribute Notices: Supported 00:21:02.277 Firmware Activation Notices: Not Supported 00:21:02.277 ANA Change Notices: Not Supported 00:21:02.277 PLE Aggregate Log Change Notices: Not Supported 00:21:02.277 LBA Status Info Alert Notices: Not Supported 00:21:02.277 EGE Aggregate Log Change Notices: Not Supported 00:21:02.277 Normal NVM Subsystem Shutdown event: Not Supported 00:21:02.277 Zone Descriptor Change Notices: Not Supported 00:21:02.277 Discovery Log Change Notices: Not Supported 00:21:02.277 Controller Attributes 00:21:02.277 128-bit Host Identifier: Not Supported 00:21:02.277 Non-Operational Permissive Mode: Not Supported 00:21:02.277 NVM Sets: Not Supported 00:21:02.277 Read Recovery Levels: Not Supported 00:21:02.277 Endurance Groups: Not Supported 00:21:02.277 Predictable Latency Mode: Not Supported 00:21:02.277 Traffic Based Keep ALive: Not Supported 00:21:02.277 Namespace Granularity: Not Supported 00:21:02.277 SQ Associations: Not Supported 00:21:02.277 UUID List: Not Supported 00:21:02.277 Multi-Domain Subsystem: Not Supported 00:21:02.277 Fixed Capacity Management: Not Supported 00:21:02.277 Variable Capacity Management: Not Supported 00:21:02.277 Delete Endurance Group: Not Supported 00:21:02.277 Delete NVM Set: Not Supported 00:21:02.277 Extended LBA Formats Supported: Supported 00:21:02.277 Flexible Data Placement Supported: Not Supported 00:21:02.277 00:21:02.277 Controller Memory Buffer Support 00:21:02.277 ================================ 00:21:02.277 Supported: No 00:21:02.277 00:21:02.277 Persistent Memory Region Support 00:21:02.277 ================================ 00:21:02.277 Supported: No 00:21:02.277 00:21:02.277 Admin Command Set Attributes 00:21:02.277 ============================ 00:21:02.277 Security Send/Receive: Not Supported 00:21:02.277 Format NVM: Supported 00:21:02.277 Firmware Activate/Download: Not Supported 00:21:02.277 Namespace Management: Supported 00:21:02.277 Device Self-Test: Not Supported 00:21:02.277 Directives: Supported 00:21:02.277 NVMe-MI: Not Supported 00:21:02.277 Virtualization Management: Not Supported 00:21:02.277 Doorbell Buffer Config: Supported 00:21:02.278 Get LBA Status Capability: Not Supported 00:21:02.278 Command & Feature Lockdown Capability: Not Supported 00:21:02.278 Abort Command Limit: 4 00:21:02.278 Async Event Request Limit: 4 00:21:02.278 Number of Firmware Slots: N/A 00:21:02.278 Firmware Slot 1 Read-Only: N/A 00:21:02.278 Firmware Activation Without Reset: N/A 00:21:02.278 Multiple Update Detection Support: N/A 00:21:02.278 Firmware Update Granularity: No Information Provided 00:21:02.278 Per-Namespace SMART Log: Yes 00:21:02.278 Asymmetric Namespace Access Log Page: Not Supported 00:21:02.278 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:21:02.278 Command Effects Log Page: Supported 00:21:02.278 Get Log Page Extended Data: Supported 00:21:02.278 Telemetry Log Pages: Not Supported 00:21:02.278 Persistent Event Log Pages: Not Supported 00:21:02.278 Supported Log Pages Log Page: May Support 00:21:02.278 Commands Supported & Effects Log Page: Not Supported 00:21:02.278 Feature Identifiers & Effects Log Page:May Support 00:21:02.278 NVMe-MI Commands & Effects Log Page: May Support 00:21:02.278 Data Area 4 for Telemetry Log: Not Supported 00:21:02.278 Error Log 
Page Entries Supported: 1 00:21:02.278 Keep Alive: Not Supported 00:21:02.278 00:21:02.278 NVM Command Set Attributes 00:21:02.278 ========================== 00:21:02.278 Submission Queue Entry Size 00:21:02.278 Max: 64 00:21:02.278 Min: 64 00:21:02.278 Completion Queue Entry Size 00:21:02.278 Max: 16 00:21:02.278 Min: 16 00:21:02.278 Number of Namespaces: 256 00:21:02.278 Compare Command: Supported 00:21:02.278 Write Uncorrectable Command: Not Supported 00:21:02.278 Dataset Management Command: Supported 00:21:02.278 Write Zeroes Command: Supported 00:21:02.278 Set Features Save Field: Supported 00:21:02.278 Reservations: Not Supported 00:21:02.278 Timestamp: Supported 00:21:02.278 Copy: Supported 00:21:02.278 Volatile Write Cache: Present 00:21:02.278 Atomic Write Unit (Normal): 1 00:21:02.278 Atomic Write Unit (PFail): 1 00:21:02.278 Atomic Compare & Write Unit: 1 00:21:02.278 Fused Compare & Write: Not Supported 00:21:02.278 Scatter-Gather List 00:21:02.278 SGL Command Set: Supported 00:21:02.278 SGL Keyed: Not Supported 00:21:02.278 SGL Bit Bucket Descriptor: Not Supported 00:21:02.278 SGL Metadata Pointer: Not Supported 00:21:02.278 Oversized SGL: Not Supported 00:21:02.278 SGL Metadata Address: Not Supported 00:21:02.278 SGL Offset: Not Supported 00:21:02.278 Transport SGL Data Block: Not Supported 00:21:02.278 Replay Protected Memory Block: Not Supported 00:21:02.278 00:21:02.278 Firmware Slot Information 00:21:02.278 ========================= 00:21:02.278 Active slot: 1 00:21:02.278 Slot 1 Firmware Revision: 1.0 00:21:02.278 00:21:02.278 00:21:02.278 Commands Supported and Effects 00:21:02.278 ============================== 00:21:02.278 Admin Commands 00:21:02.278 -------------- 00:21:02.278 Delete I/O Submission Queue (00h): Supported 00:21:02.278 Create I/O Submission Queue (01h): Supported 00:21:02.278 Get Log Page (02h): Supported 00:21:02.278 Delete I/O Completion Queue (04h): Supported 00:21:02.278 Create I/O Completion Queue (05h): Supported 00:21:02.278 Identify (06h): Supported 00:21:02.278 Abort (08h): Supported 00:21:02.278 Set Features (09h): Supported 00:21:02.278 Get Features (0Ah): Supported 00:21:02.278 Asynchronous Event Request (0Ch): Supported 00:21:02.278 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:02.278 Directive Send (19h): Supported 00:21:02.278 Directive Receive (1Ah): Supported 00:21:02.278 Virtualization Management (1Ch): Supported 00:21:02.278 Doorbell Buffer Config (7Ch): Supported 00:21:02.278 Format NVM (80h): Supported LBA-Change 00:21:02.278 I/O Commands 00:21:02.278 ------------ 00:21:02.278 Flush (00h): Supported LBA-Change 00:21:02.278 Write (01h): Supported LBA-Change 00:21:02.278 Read (02h): Supported 00:21:02.278 Compare (05h): Supported 00:21:02.278 Write Zeroes (08h): Supported LBA-Change 00:21:02.278 Dataset Management (09h): Supported LBA-Change 00:21:02.278 Unknown (0Ch): Supported 00:21:02.278 Unknown (12h): Supported 00:21:02.278 Copy (19h): Supported LBA-Change 00:21:02.278 Unknown (1Dh): Supported LBA-Change 00:21:02.278 00:21:02.278 Error Log 00:21:02.278 ========= 00:21:02.278 00:21:02.278 Arbitration 00:21:02.278 =========== 00:21:02.278 Arbitration Burst: no limit 00:21:02.278 00:21:02.278 Power Management 00:21:02.278 ================ 00:21:02.278 Number of Power States: 1 00:21:02.278 Current Power State: Power State #0 00:21:02.278 Power State #0: 00:21:02.278 Max Power: 25.00 W 00:21:02.278 Non-Operational State: Operational 00:21:02.278 Entry Latency: 16 microseconds 00:21:02.278 Exit Latency: 4 
microseconds 00:21:02.278 Relative Read Throughput: 0 00:21:02.278 Relative Read Latency: 0 00:21:02.278 Relative Write Throughput: 0 00:21:02.278 Relative Write Latency: 0 00:21:02.278 Idle Power: Not Reported 00:21:02.278 Active Power: Not Reported 00:21:02.278 Non-Operational Permissive Mode: Not Supported 00:21:02.278 00:21:02.278 Health Information 00:21:02.278 ================== 00:21:02.278 Critical Warnings: 00:21:02.278 Available Spare Space: OK 00:21:02.278 Temperature: OK 00:21:02.278 Device Reliability: OK 00:21:02.278 Read Only: No 00:21:02.278 Volatile Memory Backup: OK 00:21:02.278 Current Temperature: 323 Kelvin (50 Celsius) 00:21:02.278 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:02.278 Available Spare: 0% 00:21:02.278 Available Spare Threshold: 0% 00:21:02.278 Life Percentage Used: 0% 00:21:02.278 Data Units Read: 4635 00:21:02.278 Data Units Written: 4365 00:21:02.278 Host Read Commands: 217038 00:21:02.278 Host Write Commands: 231470 00:21:02.278 Controller Busy Time: 0 minutes 00:21:02.278 Power Cycles: 0 00:21:02.278 Power On Hours: 0 hours 00:21:02.278 Unsafe Shutdowns: 0 00:21:02.278 Unrecoverable Media Errors: 0 00:21:02.278 Lifetime Error Log Entries: 0 00:21:02.278 Warning Temperature Time: 0 minutes 00:21:02.278 Critical Temperature Time: 0 minutes 00:21:02.278 00:21:02.278 Number of Queues 00:21:02.278 ================ 00:21:02.278 Number of I/O Submission Queues: 64 00:21:02.278 Number of I/O Completion Queues: 64 00:21:02.278 00:21:02.278 ZNS Specific Controller Data 00:21:02.278 ============================ 00:21:02.278 Zone Append Size Limit: 0 00:21:02.278 00:21:02.278 00:21:02.278 Active Namespaces 00:21:02.278 ================= 00:21:02.278 Namespace ID:1 00:21:02.278 Error Recovery Timeout: Unlimited 00:21:02.278 Command Set Identifier: NVM (00h) 00:21:02.278 Deallocate: Supported 00:21:02.278 Deallocated/Unwritten Error: Supported 00:21:02.278 Deallocated Read Value: All 0x00 00:21:02.278 Deallocate in Write Zeroes: Not Supported 00:21:02.278 Deallocated Guard Field: 0xFFFF 00:21:02.278 Flush: Supported 00:21:02.278 Reservation: Not Supported 00:21:02.278 Namespace Sharing Capabilities: Private 00:21:02.278 Size (in LBAs): 1310720 (5GiB) 00:21:02.278 Capacity (in LBAs): 1310720 (5GiB) 00:21:02.278 Utilization (in LBAs): 1310720 (5GiB) 00:21:02.278 Thin Provisioning: Not Supported 00:21:02.278 Per-NS Atomic Units: No 00:21:02.278 Maximum Single Source Range Length: 128 00:21:02.278 Maximum Copy Length: 128 00:21:02.278 Maximum Source Range Count: 128 00:21:02.278 NGUID/EUI64 Never Reused: No 00:21:02.278 Namespace Write Protected: No 00:21:02.278 Number of LBA Formats: 8 00:21:02.278 Current LBA Format: LBA Format #04 00:21:02.278 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:02.278 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:02.278 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:02.278 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:02.278 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:02.278 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:02.278 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:02.278 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:02.278 00:21:02.278 NVM Specific Namespace Data 00:21:02.278 =========================== 00:21:02.278 Logical Block Storage Tag Mask: 0 00:21:02.278 Protection Information Capabilities: 00:21:02.278 16b Guard Protection Information Storage Tag Support: No 00:21:02.278 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can 
be 0 00:21:02.278 Storage Tag Check Read Support: No 00:21:02.278 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:02.278 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:02.278 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:02.278 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:02.279 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:02.279 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:02.279 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:02.279 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:02.279 00:21:02.279 real 0m0.643s 00:21:02.279 user 0m0.245s 00:21:02.279 sys 0m0.331s 00:21:02.279 07:31:06 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:02.279 07:31:06 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:21:02.279 ************************************ 00:21:02.279 END TEST nvme_identify 00:21:02.279 ************************************ 00:21:02.279 07:31:06 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:21:02.279 07:31:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:02.279 07:31:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:02.279 07:31:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:02.279 ************************************ 00:21:02.279 START TEST nvme_perf 00:21:02.279 ************************************ 00:21:02.279 07:31:06 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:21:02.279 07:31:06 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:21:03.676 Initializing NVMe Controllers 00:21:03.676 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:03.676 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:21:03.676 Initialization complete. Launching workers. 
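Before the result tables: spdk_nvme_perf here issues 12288-byte (12 KiB) reads at queue depth 128 for 1 second, so the MiB/s column in the summary that follows is simply IOPS * 12288 / 2^20. A one-line check (illustrative, not part of the test harness) that reproduces the reported read-pass throughput from its IOPS value:

    awk 'BEGIN { printf "%.2f MiB/s\n", 107389.60 * 12288 / 1048576 }'   # -> 1258.47 MiB/s, matching the table below

The same conversion applies to the -w write pass further down (88075.27 IOPS -> 1032.13 MiB/s).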
00:21:03.676 ======================================================== 00:21:03.676 Latency(us) 00:21:03.676 Device Information : IOPS MiB/s Average min max 00:21:03.676 PCIE (0000:00:10.0) NSID 1 from core 0: 107389.60 1258.47 1191.17 551.34 5847.97 00:21:03.676 ======================================================== 00:21:03.677 Total : 107389.60 1258.47 1191.17 551.34 5847.97 00:21:03.677 00:21:03.677 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:21:03.677 ================================================================================= 00:21:03.677 1.00000% : 676.108us 00:21:03.677 10.00000% : 772.695us 00:21:03.677 25.00000% : 905.055us 00:21:03.677 50.00000% : 1151.888us 00:21:03.677 75.00000% : 1387.990us 00:21:03.677 90.00000% : 1538.236us 00:21:03.677 95.00000% : 1731.410us 00:21:03.677 98.00000% : 2518.414us 00:21:03.677 99.00000% : 2861.834us 00:21:03.677 99.50000% : 3133.708us 00:21:03.677 99.90000% : 3977.949us 00:21:03.677 99.99000% : 5551.958us 00:21:03.677 99.99900% : 5809.523us 00:21:03.677 99.99990% : 5866.760us 00:21:03.677 99.99999% : 5866.760us 00:21:03.677 00:21:03.677 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:21:03.677 ============================================================================== 00:21:03.677 Range in us Cumulative IO count 00:21:03.677 550.903 - 554.480: 0.0009% ( 1) 00:21:03.677 558.058 - 561.635: 0.0028% ( 2) 00:21:03.677 565.212 - 568.790: 0.0037% ( 1) 00:21:03.677 572.367 - 575.944: 0.0047% ( 1) 00:21:03.677 575.944 - 579.521: 0.0065% ( 2) 00:21:03.677 579.521 - 583.099: 0.0074% ( 1) 00:21:03.677 583.099 - 586.676: 0.0112% ( 4) 00:21:03.677 586.676 - 590.253: 0.0121% ( 1) 00:21:03.677 590.253 - 593.831: 0.0158% ( 4) 00:21:03.677 593.831 - 597.408: 0.0233% ( 8) 00:21:03.677 597.408 - 600.985: 0.0270% ( 4) 00:21:03.677 600.985 - 604.562: 0.0326% ( 6) 00:21:03.677 604.562 - 608.140: 0.0437% ( 12) 00:21:03.677 608.140 - 611.717: 0.0568% ( 14) 00:21:03.677 611.717 - 615.294: 0.0689% ( 13) 00:21:03.677 615.294 - 618.872: 0.0810% ( 13) 00:21:03.677 618.872 - 622.449: 0.0959% ( 16) 00:21:03.678 622.449 - 626.026: 0.1117% ( 17) 00:21:03.678 626.026 - 629.603: 0.1415% ( 32) 00:21:03.678 629.603 - 633.181: 0.1657% ( 26) 00:21:03.678 633.181 - 636.758: 0.2122% ( 50) 00:21:03.678 636.758 - 640.335: 0.2513% ( 42) 00:21:03.678 640.335 - 643.913: 0.3155% ( 69) 00:21:03.678 643.913 - 647.490: 0.3639% ( 52) 00:21:03.678 647.490 - 651.067: 0.4486% ( 91) 00:21:03.678 651.067 - 654.645: 0.5221% ( 79) 00:21:03.678 654.645 - 658.222: 0.6096% ( 94) 00:21:03.678 658.222 - 661.799: 0.6905% ( 87) 00:21:03.678 661.799 - 665.376: 0.7827% ( 99) 00:21:03.678 665.376 - 668.954: 0.8813% ( 106) 00:21:03.678 668.954 - 672.531: 0.9930% ( 120) 00:21:03.678 672.531 - 676.108: 1.1093% ( 125) 00:21:03.678 676.108 - 679.686: 1.2740% ( 177) 00:21:03.678 679.686 - 683.263: 1.4359% ( 174) 00:21:03.678 683.263 - 686.840: 1.5876% ( 163) 00:21:03.678 686.840 - 690.417: 1.7812% ( 208) 00:21:03.678 690.417 - 693.995: 1.9692% ( 202) 00:21:03.678 693.995 - 697.572: 2.1898% ( 237) 00:21:03.678 697.572 - 701.149: 2.4196% ( 247) 00:21:03.678 701.149 - 704.727: 2.6616% ( 260) 00:21:03.678 704.727 - 708.304: 2.9119% ( 269) 00:21:03.678 708.304 - 711.881: 3.2134% ( 324) 00:21:03.678 711.881 - 715.459: 3.4973% ( 305) 00:21:03.678 715.459 - 719.036: 3.7830% ( 307) 00:21:03.679 719.036 - 722.613: 4.1068% ( 348) 00:21:03.679 722.613 - 726.190: 4.4428% ( 361) 00:21:03.679 726.190 - 729.768: 4.7722% ( 354) 00:21:03.679 729.768 - 733.345: 5.1435% ( 399) 
00:21:03.679 733.345 - 736.922: 5.5493% ( 436) 00:21:03.679 736.922 - 740.500: 5.9448% ( 425) 00:21:03.679 740.500 - 744.077: 6.3301% ( 414) 00:21:03.679 744.077 - 747.654: 6.7489% ( 450) 00:21:03.679 747.654 - 751.231: 7.1844% ( 468) 00:21:03.679 751.231 - 754.809: 7.6144% ( 462) 00:21:03.679 754.809 - 758.386: 8.0955% ( 517) 00:21:03.679 758.386 - 761.963: 8.5766% ( 517) 00:21:03.679 761.963 - 765.541: 9.0698% ( 530) 00:21:03.679 765.541 - 769.118: 9.5296% ( 494) 00:21:03.679 769.118 - 772.695: 10.0302% ( 538) 00:21:03.679 772.695 - 776.272: 10.5132% ( 519) 00:21:03.679 776.272 - 779.850: 11.0307% ( 556) 00:21:03.679 779.850 - 783.427: 11.5127% ( 518) 00:21:03.679 783.427 - 787.004: 12.0143% ( 539) 00:21:03.679 787.004 - 790.582: 12.4666% ( 486) 00:21:03.679 790.582 - 794.159: 12.9645% ( 535) 00:21:03.679 794.159 - 797.736: 13.4419% ( 513) 00:21:03.679 797.736 - 801.314: 13.8858% ( 477) 00:21:03.679 801.314 - 804.891: 14.3437% ( 492) 00:21:03.679 804.891 - 808.468: 14.8220% ( 514) 00:21:03.679 808.468 - 812.045: 15.2948% ( 508) 00:21:03.679 812.045 - 815.623: 15.7517% ( 491) 00:21:03.679 815.623 - 819.200: 16.2049% ( 487) 00:21:03.679 819.200 - 822.777: 16.6246% ( 451) 00:21:03.679 822.777 - 826.355: 17.0443% ( 451) 00:21:03.679 826.355 - 829.932: 17.4789% ( 467) 00:21:03.679 829.932 - 833.509: 17.9033% ( 456) 00:21:03.679 833.509 - 837.086: 18.3230% ( 451) 00:21:03.679 837.086 - 840.664: 18.7018% ( 407) 00:21:03.680 840.664 - 844.241: 19.1047% ( 433) 00:21:03.680 844.241 - 847.818: 19.5226% ( 449) 00:21:03.680 847.818 - 851.396: 19.9302% ( 438) 00:21:03.680 851.396 - 854.973: 20.3136% ( 412) 00:21:03.680 854.973 - 858.550: 20.6933% ( 408) 00:21:03.680 858.550 - 862.128: 21.0860% ( 422) 00:21:03.680 862.128 - 865.705: 21.4760% ( 419) 00:21:03.680 865.705 - 869.282: 21.8650% ( 418) 00:21:03.680 869.282 - 872.859: 22.2186% ( 380) 00:21:03.680 872.859 - 876.437: 22.5760% ( 384) 00:21:03.680 876.437 - 880.014: 22.9259% ( 376) 00:21:03.680 880.014 - 883.591: 23.2870% ( 388) 00:21:03.680 883.591 - 887.169: 23.6294% ( 368) 00:21:03.680 887.169 - 890.746: 23.9542% ( 349) 00:21:03.680 890.746 - 894.323: 24.3078% ( 380) 00:21:03.680 894.323 - 897.900: 24.6457% ( 363) 00:21:03.680 897.900 - 901.478: 24.9826% ( 362) 00:21:03.680 901.478 - 905.055: 25.3120% ( 354) 00:21:03.680 905.055 - 908.632: 25.6731% ( 388) 00:21:03.680 908.632 - 912.210: 25.9969% ( 348) 00:21:03.680 912.210 - 915.787: 26.3171% ( 344) 00:21:03.680 915.787 - 922.941: 26.9797% ( 712) 00:21:03.680 922.941 - 930.096: 27.6609% ( 732) 00:21:03.680 930.096 - 937.251: 28.3188% ( 707) 00:21:03.680 937.251 - 944.405: 29.0112% ( 744) 00:21:03.680 944.405 - 951.560: 29.7157% ( 757) 00:21:03.680 951.560 - 958.714: 30.4202% ( 757) 00:21:03.680 958.714 - 965.869: 31.1247% ( 757) 00:21:03.680 965.869 - 973.024: 31.8263% ( 754) 00:21:03.680 973.024 - 980.178: 32.5318% ( 758) 00:21:03.680 980.178 - 987.333: 33.2232% ( 743) 00:21:03.680 987.333 - 994.487: 33.9463% ( 777) 00:21:03.680 994.487 - 1001.642: 34.6675% ( 775) 00:21:03.680 1001.642 - 1008.797: 35.3767% ( 762) 00:21:03.680 1008.797 - 1015.951: 36.0942% ( 771) 00:21:03.680 1015.951 - 1023.106: 36.8247% ( 785) 00:21:03.680 1023.106 - 1030.260: 37.5432% ( 772) 00:21:03.681 1030.260 - 1037.415: 38.2588% ( 769) 00:21:03.681 1037.415 - 1044.569: 38.9847% ( 780) 00:21:03.681 1044.569 - 1051.724: 39.7069% ( 776) 00:21:03.681 1051.724 - 1058.879: 40.4411% ( 789) 00:21:03.681 1058.879 - 1066.033: 41.2061% ( 822) 00:21:03.681 1066.033 - 1073.188: 41.9282% ( 776) 00:21:03.681 1073.188 - 1080.342: 
42.6988% ( 828) 00:21:03.681 1080.342 - 1087.497: 43.4377% ( 794) 00:21:03.681 1087.497 - 1094.652: 44.1934% ( 812) 00:21:03.681 1094.652 - 1101.806: 44.9388% ( 801) 00:21:03.681 1101.806 - 1108.961: 45.7168% ( 836) 00:21:03.681 1108.961 - 1116.115: 46.4436% ( 781) 00:21:03.681 1116.115 - 1123.270: 47.2123% ( 826) 00:21:03.681 1123.270 - 1130.424: 47.9568% ( 800) 00:21:03.681 1130.424 - 1137.579: 48.7339% ( 835) 00:21:03.681 1137.579 - 1144.734: 49.4886% ( 811) 00:21:03.681 1144.734 - 1151.888: 50.2685% ( 838) 00:21:03.681 1151.888 - 1159.043: 51.0204% ( 808) 00:21:03.681 1159.043 - 1166.197: 51.7938% ( 831) 00:21:03.681 1166.197 - 1173.352: 52.5485% ( 811) 00:21:03.681 1173.352 - 1180.507: 53.3107% ( 819) 00:21:03.681 1180.507 - 1187.661: 54.0868% ( 834) 00:21:03.681 1187.661 - 1194.816: 54.8416% ( 811) 00:21:03.681 1194.816 - 1201.970: 55.6056% ( 821) 00:21:03.681 1201.970 - 1209.125: 56.3687% ( 820) 00:21:03.681 1209.125 - 1216.279: 57.1253% ( 813) 00:21:03.681 1216.279 - 1223.434: 57.8828% ( 814) 00:21:03.681 1223.434 - 1230.589: 58.6701% ( 846) 00:21:03.681 1230.589 - 1237.743: 59.3988% ( 783) 00:21:03.681 1237.743 - 1244.898: 60.1768% ( 836) 00:21:03.681 1244.898 - 1252.052: 60.9157% ( 794) 00:21:03.681 1252.052 - 1259.207: 61.7040% ( 847) 00:21:03.681 1259.207 - 1266.362: 62.4047% ( 753) 00:21:03.681 1266.362 - 1273.516: 63.2106% ( 866) 00:21:03.681 1273.516 - 1280.671: 63.9403% ( 784) 00:21:03.681 1280.671 - 1287.825: 64.7285% ( 847) 00:21:03.681 1287.825 - 1294.980: 65.4795% ( 807) 00:21:03.681 1294.980 - 1302.134: 66.2333% ( 810) 00:21:03.681 1302.134 - 1309.289: 66.9862% ( 809) 00:21:03.681 1309.289 - 1316.444: 67.7381% ( 808) 00:21:03.681 1316.444 - 1323.598: 68.4947% ( 813) 00:21:03.681 1323.598 - 1330.753: 69.2681% ( 831) 00:21:03.681 1330.753 - 1337.907: 70.0209% ( 809) 00:21:03.681 1337.907 - 1345.062: 70.7775% ( 813) 00:21:03.681 1345.062 - 1352.217: 71.5416% ( 821) 00:21:03.681 1352.217 - 1359.371: 72.2898% ( 804) 00:21:03.681 1359.371 - 1366.526: 73.0483% ( 815) 00:21:03.681 1366.526 - 1373.680: 73.7928% ( 800) 00:21:03.681 1373.680 - 1380.835: 74.5493% ( 813) 00:21:03.681 1380.835 - 1387.990: 75.2883% ( 794) 00:21:03.681 1387.990 - 1395.144: 76.0560% ( 825) 00:21:03.681 1395.144 - 1402.299: 76.7884% ( 787) 00:21:03.681 1402.299 - 1409.453: 77.5636% ( 833) 00:21:03.681 1409.453 - 1416.608: 78.3156% ( 808) 00:21:03.681 1416.608 - 1423.762: 79.0433% ( 782) 00:21:03.681 1423.762 - 1430.917: 79.8204% ( 835) 00:21:03.681 1430.917 - 1438.072: 80.5556% ( 790) 00:21:03.681 1438.072 - 1445.226: 81.3075% ( 808) 00:21:03.681 1445.226 - 1452.381: 82.0250% ( 771) 00:21:03.681 1452.381 - 1459.535: 82.7872% ( 819) 00:21:03.681 1459.535 - 1466.690: 83.4945% ( 760) 00:21:03.681 1466.690 - 1473.845: 84.2697% ( 833) 00:21:03.681 1473.845 - 1480.999: 84.9705% ( 753) 00:21:03.681 1480.999 - 1488.154: 85.7336% ( 820) 00:21:03.681 1488.154 - 1495.308: 86.4250% ( 743) 00:21:03.681 1495.308 - 1502.463: 87.1872% ( 819) 00:21:03.681 1502.463 - 1509.617: 87.8545% ( 717) 00:21:03.681 1509.617 - 1516.772: 88.5831% ( 783) 00:21:03.681 1516.772 - 1523.927: 89.2364% ( 702) 00:21:03.681 1523.927 - 1531.081: 89.9139% ( 728) 00:21:03.681 1531.081 - 1538.236: 90.5132% ( 644) 00:21:03.681 1538.236 - 1545.390: 91.1219% ( 654) 00:21:03.681 1545.390 - 1552.545: 91.6458% ( 563) 00:21:03.681 1552.545 - 1559.700: 92.1511% ( 543) 00:21:03.681 1559.700 - 1566.854: 92.5885% ( 470) 00:21:03.681 1566.854 - 1574.009: 92.9682% ( 408) 00:21:03.681 1574.009 - 1581.163: 93.3023% ( 359) 00:21:03.681 1581.163 - 1588.318: 
93.5768% ( 295) 00:21:03.681 1588.318 - 1595.472: 93.7890% ( 228) 00:21:03.681 1595.472 - 1602.627: 93.9603% ( 184) 00:21:03.681 1602.627 - 1609.782: 94.0933% ( 143) 00:21:03.681 1609.782 - 1616.936: 94.2143% ( 130) 00:21:03.681 1616.936 - 1624.091: 94.3083% ( 101) 00:21:03.681 1624.091 - 1631.245: 94.3846% ( 82) 00:21:03.681 1631.245 - 1638.400: 94.4526% ( 73) 00:21:03.681 1638.400 - 1645.555: 94.5010% ( 52) 00:21:03.681 1645.555 - 1652.709: 94.5549% ( 58) 00:21:03.681 1652.709 - 1659.864: 94.6024% ( 51) 00:21:03.681 1659.864 - 1667.018: 94.6443% ( 45) 00:21:03.681 1667.018 - 1674.173: 94.6936% ( 53) 00:21:03.681 1674.173 - 1681.328: 94.7345% ( 44) 00:21:03.681 1681.328 - 1688.482: 94.7718% ( 40) 00:21:03.681 1688.482 - 1695.637: 94.8211% ( 53) 00:21:03.681 1695.637 - 1702.791: 94.8565% ( 38) 00:21:03.681 1702.791 - 1709.946: 94.8993% ( 46) 00:21:03.681 1709.946 - 1717.100: 94.9356% ( 39) 00:21:03.681 1717.100 - 1724.255: 94.9802% ( 48) 00:21:03.681 1724.255 - 1731.410: 95.0137% ( 36) 00:21:03.681 1731.410 - 1738.564: 95.0603% ( 50) 00:21:03.681 1738.564 - 1745.719: 95.1040% ( 47) 00:21:03.681 1745.719 - 1752.873: 95.1412% ( 40) 00:21:03.681 1752.873 - 1760.028: 95.1812% ( 43) 00:21:03.681 1760.028 - 1767.183: 95.2203% ( 42) 00:21:03.681 1767.183 - 1774.337: 95.2538% ( 36) 00:21:03.681 1774.337 - 1781.492: 95.2957% ( 45) 00:21:03.681 1781.492 - 1788.646: 95.3348% ( 42) 00:21:03.681 1788.646 - 1795.801: 95.3748% ( 43) 00:21:03.681 1795.801 - 1802.955: 95.4083% ( 36) 00:21:03.681 1802.955 - 1810.110: 95.4493% ( 44) 00:21:03.681 1810.110 - 1817.265: 95.4809% ( 34) 00:21:03.681 1817.265 - 1824.419: 95.5172% ( 39) 00:21:03.681 1824.419 - 1831.574: 95.5535% ( 39) 00:21:03.681 1831.574 - 1845.883: 95.6177% ( 69) 00:21:03.681 1845.883 - 1860.192: 95.6940% ( 82) 00:21:03.681 1860.192 - 1874.501: 95.7582% ( 69) 00:21:03.681 1874.501 - 1888.810: 95.8206% ( 67) 00:21:03.681 1888.810 - 1903.120: 95.8867% ( 71) 00:21:03.681 1903.120 - 1917.429: 95.9471% ( 65) 00:21:03.681 1917.429 - 1931.738: 96.0076% ( 65) 00:21:03.681 1931.738 - 1946.047: 96.0709% ( 68) 00:21:03.681 1946.047 - 1960.356: 96.1305% ( 64) 00:21:03.681 1960.356 - 1974.666: 96.1826% ( 56) 00:21:03.681 1974.666 - 1988.975: 96.2412% ( 63) 00:21:03.681 1988.975 - 2003.284: 96.2952% ( 58) 00:21:03.681 2003.284 - 2017.593: 96.3473% ( 56) 00:21:03.681 2017.593 - 2031.902: 96.3994% ( 56) 00:21:03.681 2031.902 - 2046.211: 96.4506% ( 55) 00:21:03.681 2046.211 - 2060.521: 96.4971% ( 50) 00:21:03.681 2060.521 - 2074.830: 96.5502% ( 57) 00:21:03.681 2074.830 - 2089.139: 96.6004% ( 54) 00:21:03.681 2089.139 - 2103.448: 96.6470% ( 50) 00:21:03.681 2103.448 - 2117.757: 96.6982% ( 55) 00:21:03.681 2117.757 - 2132.066: 96.7465% ( 52) 00:21:03.681 2132.066 - 2146.376: 96.7996% ( 57) 00:21:03.681 2146.376 - 2160.685: 96.8564% ( 61) 00:21:03.681 2160.685 - 2174.994: 96.9010% ( 48) 00:21:03.681 2174.994 - 2189.303: 96.9522% ( 55) 00:21:03.681 2189.303 - 2203.612: 97.0090% ( 61) 00:21:03.681 2203.612 - 2217.921: 97.0574% ( 52) 00:21:03.681 2217.921 - 2232.231: 97.1160% ( 63) 00:21:03.681 2232.231 - 2246.540: 97.1709% ( 59) 00:21:03.681 2246.540 - 2260.849: 97.2184% ( 51) 00:21:03.681 2260.849 - 2275.158: 97.2742% ( 60) 00:21:03.681 2275.158 - 2289.467: 97.3170% ( 46) 00:21:03.681 2289.467 - 2303.776: 97.3617% ( 48) 00:21:03.681 2303.776 - 2318.086: 97.4110% ( 53) 00:21:03.682 2318.086 - 2332.395: 97.4566% ( 49) 00:21:03.682 2332.395 - 2346.704: 97.5041% ( 51) 00:21:03.682 2346.704 - 2361.013: 97.5515% ( 51) 00:21:03.682 2361.013 - 2375.322: 97.5971% ( 49) 
00:21:03.682 2375.322 - 2389.631: 97.6381% ( 44) 00:21:03.682 2389.631 - 2403.941: 97.6809% ( 46) 00:21:03.682 2403.941 - 2418.250: 97.7274% ( 50) 00:21:03.682 2418.250 - 2432.559: 97.7693% ( 45) 00:21:03.682 2432.559 - 2446.868: 97.8168% ( 51) 00:21:03.682 2446.868 - 2461.177: 97.8586% ( 45) 00:21:03.682 2461.177 - 2475.486: 97.9042% ( 49) 00:21:03.682 2475.486 - 2489.796: 97.9508% ( 50) 00:21:03.682 2489.796 - 2504.105: 97.9992% ( 52) 00:21:03.682 2504.105 - 2518.414: 98.0438% ( 48) 00:21:03.682 2518.414 - 2532.723: 98.0866% ( 46) 00:21:03.682 2532.723 - 2547.032: 98.1332% ( 50) 00:21:03.682 2547.032 - 2561.341: 98.1760% ( 46) 00:21:03.682 2561.341 - 2575.651: 98.2188% ( 46) 00:21:03.682 2575.651 - 2589.960: 98.2653% ( 50) 00:21:03.682 2589.960 - 2604.269: 98.3100% ( 48) 00:21:03.682 2604.269 - 2618.578: 98.3547% ( 48) 00:21:03.682 2618.578 - 2632.887: 98.3993% ( 48) 00:21:03.682 2632.887 - 2647.197: 98.4440% ( 48) 00:21:03.682 2647.197 - 2661.506: 98.4831% ( 42) 00:21:03.682 2661.506 - 2675.815: 98.5240% ( 44) 00:21:03.682 2675.815 - 2690.124: 98.5631% ( 42) 00:21:03.682 2690.124 - 2704.433: 98.6050% ( 45) 00:21:03.682 2704.433 - 2718.742: 98.6441% ( 42) 00:21:03.682 2718.742 - 2733.052: 98.6860% ( 45) 00:21:03.682 2733.052 - 2747.361: 98.7260% ( 43) 00:21:03.682 2747.361 - 2761.670: 98.7679% ( 45) 00:21:03.682 2761.670 - 2775.979: 98.8107% ( 46) 00:21:03.682 2775.979 - 2790.288: 98.8460% ( 38) 00:21:03.682 2790.288 - 2804.597: 98.8870% ( 44) 00:21:03.682 2804.597 - 2818.907: 98.9251% ( 41) 00:21:03.682 2818.907 - 2833.216: 98.9596% ( 37) 00:21:03.682 2833.216 - 2847.525: 98.9977% ( 41) 00:21:03.682 2847.525 - 2861.834: 99.0349% ( 40) 00:21:03.682 2861.834 - 2876.143: 99.0750% ( 43) 00:21:03.682 2876.143 - 2890.452: 99.1122% ( 40) 00:21:03.682 2890.452 - 2904.762: 99.1485% ( 39) 00:21:03.682 2904.762 - 2919.071: 99.1848% ( 39) 00:21:03.682 2919.071 - 2933.380: 99.2211% ( 39) 00:21:03.682 2933.380 - 2947.689: 99.2546% ( 36) 00:21:03.682 2947.689 - 2961.998: 99.2816% ( 29) 00:21:03.682 2961.998 - 2976.307: 99.3076% ( 28) 00:21:03.682 2976.307 - 2990.617: 99.3337% ( 28) 00:21:03.682 2990.617 - 3004.926: 99.3588% ( 27) 00:21:03.682 3004.926 - 3019.235: 99.3821% ( 25) 00:21:03.682 3019.235 - 3033.544: 99.4035% ( 23) 00:21:03.682 3033.544 - 3047.853: 99.4193% ( 17) 00:21:03.682 3047.853 - 3062.162: 99.4360% ( 18) 00:21:03.682 3062.162 - 3076.472: 99.4528% ( 18) 00:21:03.682 3076.472 - 3090.781: 99.4640% ( 12) 00:21:03.682 3090.781 - 3105.090: 99.4779% ( 15) 00:21:03.682 3105.090 - 3119.399: 99.4919% ( 15) 00:21:03.682 3119.399 - 3133.708: 99.5040% ( 13) 00:21:03.682 3133.708 - 3148.017: 99.5170% ( 14) 00:21:03.682 3148.017 - 3162.327: 99.5282% ( 12) 00:21:03.682 3162.327 - 3176.636: 99.5403% ( 13) 00:21:03.682 3176.636 - 3190.945: 99.5514% ( 12) 00:21:03.682 3190.945 - 3205.254: 99.5607% ( 10) 00:21:03.682 3205.254 - 3219.563: 99.5710% ( 11) 00:21:03.682 3219.563 - 3233.872: 99.5803% ( 10) 00:21:03.682 3233.872 - 3248.182: 99.5887% ( 9) 00:21:03.682 3248.182 - 3262.491: 99.5970% ( 9) 00:21:03.682 3262.491 - 3276.800: 99.6045% ( 8) 00:21:03.682 3276.800 - 3291.109: 99.6110% ( 7) 00:21:03.682 3291.109 - 3305.418: 99.6166% ( 6) 00:21:03.682 3305.418 - 3319.728: 99.6231% ( 7) 00:21:03.682 3319.728 - 3334.037: 99.6296% ( 7) 00:21:03.682 3334.037 - 3348.346: 99.6380% ( 9) 00:21:03.682 3348.346 - 3362.655: 99.6454% ( 8) 00:21:03.682 3362.655 - 3376.964: 99.6501% ( 5) 00:21:03.682 3376.964 - 3391.273: 99.6585% ( 9) 00:21:03.682 3391.273 - 3405.583: 99.6650% ( 7) 00:21:03.682 3405.583 - 3419.892: 
99.6715% ( 7) 00:21:03.682 3419.892 - 3434.201: 99.6789% ( 8) 00:21:03.682 3434.201 - 3448.510: 99.6864% ( 8) 00:21:03.682 3448.510 - 3462.819: 99.6920% ( 6) 00:21:03.682 3462.819 - 3477.128: 99.6994% ( 8) 00:21:03.682 3477.128 - 3491.438: 99.7069% ( 8) 00:21:03.682 3491.438 - 3505.747: 99.7124% ( 6) 00:21:03.682 3505.747 - 3520.056: 99.7199% ( 8) 00:21:03.682 3520.056 - 3534.365: 99.7273% ( 8) 00:21:03.682 3534.365 - 3548.674: 99.7311% ( 4) 00:21:03.682 3548.674 - 3562.983: 99.7385% ( 8) 00:21:03.682 3562.983 - 3577.293: 99.7478% ( 10) 00:21:03.682 3577.293 - 3591.602: 99.7534% ( 6) 00:21:03.682 3591.602 - 3605.911: 99.7608% ( 8) 00:21:03.682 3605.911 - 3620.220: 99.7683% ( 8) 00:21:03.682 3620.220 - 3634.529: 99.7776% ( 10) 00:21:03.682 3634.529 - 3648.838: 99.7841% ( 7) 00:21:03.682 3648.838 - 3663.148: 99.7915% ( 8) 00:21:03.682 3663.148 - 3691.766: 99.8046% ( 14) 00:21:03.682 3691.766 - 3720.384: 99.8195% ( 16) 00:21:03.682 3720.384 - 3749.003: 99.8325% ( 14) 00:21:03.682 3749.003 - 3777.621: 99.8427% ( 11) 00:21:03.682 3777.621 - 3806.239: 99.8558% ( 14) 00:21:03.682 3806.239 - 3834.858: 99.8660% ( 11) 00:21:03.682 3834.858 - 3863.476: 99.8753% ( 10) 00:21:03.682 3863.476 - 3892.094: 99.8837% ( 9) 00:21:03.682 3892.094 - 3920.713: 99.8939% ( 11) 00:21:03.682 3920.713 - 3949.331: 99.8976% ( 4) 00:21:03.682 3949.331 - 3977.949: 99.9041% ( 7) 00:21:03.682 3977.949 - 4006.568: 99.9079% ( 4) 00:21:03.682 4006.568 - 4035.186: 99.9125% ( 5) 00:21:03.682 4035.186 - 4063.804: 99.9153% ( 3) 00:21:03.682 4063.804 - 4092.423: 99.9190% ( 4) 00:21:03.682 4092.423 - 4121.041: 99.9228% ( 4) 00:21:03.682 4121.041 - 4149.659: 99.9256% ( 3) 00:21:03.682 4149.659 - 4178.278: 99.9293% ( 4) 00:21:03.682 4178.278 - 4206.896: 99.9330% ( 4) 00:21:03.682 4206.896 - 4235.514: 99.9358% ( 3) 00:21:03.682 4235.514 - 4264.133: 99.9376% ( 2) 00:21:03.682 4264.133 - 4292.751: 99.9404% ( 3) 00:21:03.682 4292.751 - 4321.369: 99.9423% ( 2) 00:21:03.682 4321.369 - 4349.988: 99.9460% ( 4) 00:21:03.682 4349.988 - 4378.606: 99.9470% ( 1) 00:21:03.682 4378.606 - 4407.224: 99.9488% ( 2) 00:21:03.682 4407.224 - 4435.843: 99.9497% ( 1) 00:21:03.682 4435.843 - 4464.461: 99.9507% ( 1) 00:21:03.682 4464.461 - 4493.079: 99.9525% ( 2) 00:21:03.682 4493.079 - 4521.698: 99.9535% ( 1) 00:21:03.682 4550.316 - 4578.934: 99.9553% ( 2) 00:21:03.682 4578.934 - 4607.553: 99.9563% ( 1) 00:21:03.682 4607.553 - 4636.171: 99.9572% ( 1) 00:21:03.682 4636.171 - 4664.790: 99.9581% ( 1) 00:21:03.682 4664.790 - 4693.408: 99.9591% ( 1) 00:21:03.682 4693.408 - 4722.026: 99.9609% ( 2) 00:21:03.682 4722.026 - 4750.645: 99.9618% ( 1) 00:21:03.682 4779.263 - 4807.881: 99.9628% ( 1) 00:21:03.682 4807.881 - 4836.500: 99.9646% ( 2) 00:21:03.682 4836.500 - 4865.118: 99.9656% ( 1) 00:21:03.682 4865.118 - 4893.736: 99.9665% ( 1) 00:21:03.682 4893.736 - 4922.355: 99.9674% ( 1) 00:21:03.682 4922.355 - 4950.973: 99.9684% ( 1) 00:21:03.682 4950.973 - 4979.591: 99.9702% ( 2) 00:21:03.682 4979.591 - 5008.210: 99.9712% ( 1) 00:21:03.682 5008.210 - 5036.828: 99.9721% ( 1) 00:21:03.682 5036.828 - 5065.446: 99.9730% ( 1) 00:21:03.682 5065.446 - 5094.065: 99.9749% ( 2) 00:21:03.683 5094.065 - 5122.683: 99.9758% ( 1) 00:21:03.683 5122.683 - 5151.301: 99.9767% ( 1) 00:21:03.683 5151.301 - 5179.920: 99.9777% ( 1) 00:21:03.683 5179.920 - 5208.538: 99.9786% ( 1) 00:21:03.683 5208.538 - 5237.156: 99.9795% ( 1) 00:21:03.683 5265.775 - 5294.393: 99.9805% ( 1) 00:21:03.683 5294.393 - 5323.011: 99.9814% ( 1) 00:21:03.683 5323.011 - 5351.630: 99.9823% ( 1) 00:21:03.683 5351.630 - 
5380.248: 99.9832% ( 1) 00:21:03.683 5380.248 - 5408.866: 99.9851% ( 2) 00:21:03.683 5408.866 - 5437.485: 99.9860% ( 1) 00:21:03.683 5437.485 - 5466.103: 99.9870% ( 1) 00:21:03.683 5466.103 - 5494.721: 99.9879% ( 1) 00:21:03.683 5494.721 - 5523.340: 99.9898% ( 2) 00:21:03.683 5523.340 - 5551.958: 99.9907% ( 1) 00:21:03.683 5551.958 - 5580.576: 99.9916% ( 1) 00:21:03.683 5580.576 - 5609.195: 99.9926% ( 1) 00:21:03.683 5609.195 - 5637.813: 99.9935% ( 1) 00:21:03.683 5637.813 - 5666.431: 99.9944% ( 1) 00:21:03.683 5666.431 - 5695.050: 99.9953% ( 1) 00:21:03.683 5695.050 - 5723.668: 99.9963% ( 1) 00:21:03.683 5723.668 - 5752.286: 99.9972% ( 1) 00:21:03.683 5752.286 - 5780.905: 99.9981% ( 1) 00:21:03.683 5780.905 - 5809.523: 99.9991% ( 1) 00:21:03.683 5838.141 - 5866.760: 100.0000% ( 1) 00:21:03.683 00:21:03.683 07:31:07 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:21:05.104 Initializing NVMe Controllers 00:21:05.104 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:05.104 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:21:05.104 Initialization complete. Launching workers. 00:21:05.104 ======================================================== 00:21:05.104 Latency(us) 00:21:05.104 Device Information : IOPS MiB/s Average min max 00:21:05.104 PCIE (0000:00:10.0) NSID 1 from core 0: 88075.27 1032.13 1452.74 422.63 11797.64 00:21:05.104 ======================================================== 00:21:05.104 Total : 88075.27 1032.13 1452.74 422.63 11797.64 00:21:05.104 00:21:05.104 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:21:05.104 ================================================================================= 00:21:05.104 1.00000% : 847.818us 00:21:05.104 10.00000% : 1080.342us 00:21:05.104 25.00000% : 1209.125us 00:21:05.104 50.00000% : 1373.680us 00:21:05.104 75.00000% : 1552.545us 00:21:05.104 90.00000% : 1874.501us 00:21:05.104 95.00000% : 2275.158us 00:21:05.104 98.00000% : 2847.525us 00:21:05.104 99.00000% : 3190.945us 00:21:05.104 99.50000% : 3562.983us 00:21:05.104 99.90000% : 5466.103us 00:21:05.104 99.99000% : 10245.366us 00:21:05.104 99.99900% : 11847.993us 00:21:05.104 99.99990% : 11847.993us 00:21:05.104 99.99999% : 11847.993us 00:21:05.104 00:21:05.104 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:21:05.104 ============================================================================== 00:21:05.104 Range in us Cumulative IO count 00:21:05.104 422.121 - 423.909: 0.0011% ( 1) 00:21:05.104 423.909 - 425.698: 0.0023% ( 1) 00:21:05.104 425.698 - 427.486: 0.0034% ( 1) 00:21:05.104 427.486 - 429.275: 0.0045% ( 1) 00:21:05.104 440.007 - 441.796: 0.0057% ( 1) 00:21:05.104 448.950 - 450.739: 0.0079% ( 2) 00:21:05.104 457.893 - 461.471: 0.0091% ( 1) 00:21:05.104 461.471 - 465.048: 0.0114% ( 2) 00:21:05.104 465.048 - 468.625: 0.0136% ( 2) 00:21:05.104 468.625 - 472.203: 0.0148% ( 1) 00:21:05.104 472.203 - 475.780: 0.0159% ( 1) 00:21:05.105 479.357 - 482.934: 0.0170% ( 1) 00:21:05.105 482.934 - 486.512: 0.0182% ( 1) 00:21:05.105 493.666 - 497.244: 0.0204% ( 2) 00:21:05.105 500.821 - 504.398: 0.0238% ( 3) 00:21:05.105 507.976 - 511.553: 0.0284% ( 4) 00:21:05.105 515.130 - 518.707: 0.0306% ( 2) 00:21:05.105 518.707 - 522.285: 0.0318% ( 1) 00:21:05.105 522.285 - 525.862: 0.0352% ( 3) 00:21:05.105 525.862 - 529.439: 0.0363% ( 1) 00:21:05.105 529.439 - 533.017: 0.0386% ( 2) 00:21:05.105 533.017 - 536.594: 0.0409% ( 2) 00:21:05.105 536.594 - 540.171: 
0.0431% ( 2) 00:21:05.105 540.171 - 543.748: 0.0454% ( 2) 00:21:05.105 543.748 - 547.326: 0.0511% ( 5) 00:21:05.105 547.326 - 550.903: 0.0522% ( 1) 00:21:05.105 550.903 - 554.480: 0.0568% ( 4) 00:21:05.105 554.480 - 558.058: 0.0590% ( 2) 00:21:05.105 558.058 - 561.635: 0.0602% ( 1) 00:21:05.105 561.635 - 565.212: 0.0613% ( 1) 00:21:05.105 565.212 - 568.790: 0.0636% ( 2) 00:21:05.105 568.790 - 572.367: 0.0692% ( 5) 00:21:05.105 575.944 - 579.521: 0.0749% ( 5) 00:21:05.105 579.521 - 583.099: 0.0829% ( 7) 00:21:05.105 583.099 - 586.676: 0.0840% ( 1) 00:21:05.105 586.676 - 590.253: 0.0874% ( 3) 00:21:05.105 590.253 - 593.831: 0.0908% ( 3) 00:21:05.105 593.831 - 597.408: 0.0931% ( 2) 00:21:05.105 597.408 - 600.985: 0.1022% ( 8) 00:21:05.105 600.985 - 604.562: 0.1078% ( 5) 00:21:05.105 604.562 - 608.140: 0.1101% ( 2) 00:21:05.105 608.140 - 611.717: 0.1112% ( 1) 00:21:05.105 611.717 - 615.294: 0.1158% ( 4) 00:21:05.105 615.294 - 618.872: 0.1181% ( 2) 00:21:05.105 618.872 - 622.449: 0.1203% ( 2) 00:21:05.105 622.449 - 626.026: 0.1249% ( 4) 00:21:05.105 626.026 - 629.603: 0.1294% ( 4) 00:21:05.105 629.603 - 633.181: 0.1340% ( 4) 00:21:05.105 633.181 - 636.758: 0.1374% ( 3) 00:21:05.105 636.758 - 640.335: 0.1430% ( 5) 00:21:05.105 640.335 - 643.913: 0.1510% ( 7) 00:21:05.105 643.913 - 647.490: 0.1544% ( 3) 00:21:05.105 647.490 - 651.067: 0.1601% ( 5) 00:21:05.105 651.067 - 654.645: 0.1657% ( 5) 00:21:05.105 654.645 - 658.222: 0.1691% ( 3) 00:21:05.105 658.222 - 661.799: 0.1748% ( 5) 00:21:05.105 661.799 - 665.376: 0.1771% ( 2) 00:21:05.105 665.376 - 668.954: 0.1850% ( 7) 00:21:05.105 668.954 - 672.531: 0.1930% ( 7) 00:21:05.105 672.531 - 676.108: 0.1987% ( 5) 00:21:05.105 676.108 - 679.686: 0.2066% ( 7) 00:21:05.105 679.686 - 683.263: 0.2100% ( 3) 00:21:05.105 683.263 - 686.840: 0.2168% ( 6) 00:21:05.105 686.840 - 690.417: 0.2248% ( 7) 00:21:05.105 690.417 - 693.995: 0.2304% ( 5) 00:21:05.105 693.995 - 697.572: 0.2407% ( 9) 00:21:05.105 697.572 - 701.149: 0.2475% ( 6) 00:21:05.105 701.149 - 704.727: 0.2497% ( 2) 00:21:05.105 704.727 - 708.304: 0.2543% ( 4) 00:21:05.105 708.304 - 711.881: 0.2565% ( 2) 00:21:05.105 711.881 - 715.459: 0.2645% ( 7) 00:21:05.105 715.459 - 719.036: 0.2702% ( 5) 00:21:05.105 719.036 - 722.613: 0.2770% ( 6) 00:21:05.105 722.613 - 726.190: 0.2838% ( 6) 00:21:05.105 726.190 - 729.768: 0.2963% ( 11) 00:21:05.105 729.768 - 733.345: 0.3099% ( 12) 00:21:05.105 733.345 - 736.922: 0.3178% ( 7) 00:21:05.105 736.922 - 740.500: 0.3281% ( 9) 00:21:05.105 740.500 - 744.077: 0.3360% ( 7) 00:21:05.105 744.077 - 747.654: 0.3496% ( 12) 00:21:05.105 747.654 - 751.231: 0.3599% ( 9) 00:21:05.105 751.231 - 754.809: 0.3723% ( 11) 00:21:05.105 754.809 - 758.386: 0.3803% ( 7) 00:21:05.105 758.386 - 761.963: 0.3928% ( 11) 00:21:05.105 761.963 - 765.541: 0.4030% ( 9) 00:21:05.105 765.541 - 769.118: 0.4189% ( 14) 00:21:05.105 769.118 - 772.695: 0.4336% ( 13) 00:21:05.105 772.695 - 776.272: 0.4461% ( 11) 00:21:05.105 776.272 - 779.850: 0.4586% ( 11) 00:21:05.105 779.850 - 783.427: 0.4779% ( 17) 00:21:05.105 783.427 - 787.004: 0.4961% ( 16) 00:21:05.105 787.004 - 790.582: 0.5131% ( 15) 00:21:05.105 790.582 - 794.159: 0.5324% ( 17) 00:21:05.105 794.159 - 797.736: 0.5517% ( 17) 00:21:05.105 797.736 - 801.314: 0.5767% ( 22) 00:21:05.105 801.314 - 804.891: 0.6028% ( 23) 00:21:05.105 804.891 - 808.468: 0.6255% ( 20) 00:21:05.105 808.468 - 812.045: 0.6493% ( 21) 00:21:05.105 812.045 - 815.623: 0.6777% ( 25) 00:21:05.105 815.623 - 819.200: 0.7174% ( 35) 00:21:05.105 819.200 - 822.777: 0.7390% ( 19) 
00:21:05.105 822.777 - 826.355: 0.7765% ( 33) 00:21:05.105 826.355 - 829.932: 0.8105% ( 30) 00:21:05.105 829.932 - 833.509: 0.8468% ( 32) 00:21:05.105 833.509 - 837.086: 0.8854% ( 34) 00:21:05.105 837.086 - 840.664: 0.9286% ( 38) 00:21:05.105 840.664 - 844.241: 0.9706% ( 37) 00:21:05.105 844.241 - 847.818: 1.0217% ( 45) 00:21:05.105 847.818 - 851.396: 1.0682% ( 41) 00:21:05.105 851.396 - 854.973: 1.1170% ( 43) 00:21:05.105 854.973 - 858.550: 1.1670% ( 44) 00:21:05.105 858.550 - 862.128: 1.2192% ( 46) 00:21:05.105 862.128 - 865.705: 1.2714% ( 46) 00:21:05.105 865.705 - 869.282: 1.3304% ( 52) 00:21:05.105 869.282 - 872.859: 1.3849% ( 48) 00:21:05.105 872.859 - 876.437: 1.4451% ( 53) 00:21:05.105 876.437 - 880.014: 1.5030% ( 51) 00:21:05.105 880.014 - 883.591: 1.5722% ( 61) 00:21:05.105 883.591 - 887.169: 1.6358% ( 56) 00:21:05.105 887.169 - 890.746: 1.7016% ( 58) 00:21:05.105 890.746 - 894.323: 1.7754% ( 65) 00:21:05.105 894.323 - 897.900: 1.8254% ( 44) 00:21:05.105 897.900 - 901.478: 1.9128% ( 77) 00:21:05.105 901.478 - 905.055: 1.9809% ( 60) 00:21:05.105 905.055 - 908.632: 2.0547% ( 65) 00:21:05.105 908.632 - 912.210: 2.1364% ( 72) 00:21:05.105 912.210 - 915.787: 2.2249% ( 78) 00:21:05.105 915.787 - 922.941: 2.3907% ( 146) 00:21:05.105 922.941 - 930.096: 2.5496% ( 140) 00:21:05.105 930.096 - 937.251: 2.7494% ( 176) 00:21:05.105 937.251 - 944.405: 2.9333% ( 162) 00:21:05.105 944.405 - 951.560: 3.1422% ( 184) 00:21:05.105 951.560 - 958.714: 3.3556% ( 188) 00:21:05.105 958.714 - 965.869: 3.5849% ( 202) 00:21:05.105 965.869 - 973.024: 3.8176% ( 205) 00:21:05.105 973.024 - 980.178: 4.0946% ( 244) 00:21:05.105 980.178 - 987.333: 4.3943% ( 264) 00:21:05.105 987.333 - 994.487: 4.6917% ( 262) 00:21:05.105 994.487 - 1001.642: 5.0379% ( 305) 00:21:05.105 1001.642 - 1008.797: 5.3978% ( 317) 00:21:05.105 1008.797 - 1015.951: 5.7542% ( 314) 00:21:05.105 1015.951 - 1023.106: 6.1458% ( 345) 00:21:05.105 1023.106 - 1030.260: 6.5840% ( 386) 00:21:05.105 1030.260 - 1037.415: 7.0733% ( 431) 00:21:05.105 1037.415 - 1044.569: 7.5750% ( 442) 00:21:05.105 1044.569 - 1051.724: 8.0722% ( 438) 00:21:05.105 1051.724 - 1058.879: 8.5899% ( 456) 00:21:05.105 1058.879 - 1066.033: 9.1575% ( 500) 00:21:05.105 1066.033 - 1073.188: 9.7455% ( 518) 00:21:05.105 1073.188 - 1080.342: 10.3642% ( 545) 00:21:05.105 1080.342 - 1087.497: 10.9647% ( 529) 00:21:05.105 1087.497 - 1094.652: 11.5652% ( 529) 00:21:05.105 1094.652 - 1101.806: 12.2327% ( 588) 00:21:05.105 1101.806 - 1108.961: 12.9603% ( 641) 00:21:05.105 1108.961 - 1116.115: 13.6391% ( 598) 00:21:05.105 1116.115 - 1123.270: 14.4463% ( 711) 00:21:05.106 1123.270 - 1130.424: 15.3056% ( 757) 00:21:05.106 1130.424 - 1137.579: 16.0900% ( 691) 00:21:05.106 1137.579 - 1144.734: 16.9085% ( 721) 00:21:05.106 1144.734 - 1151.888: 17.7235% ( 718) 00:21:05.106 1151.888 - 1159.043: 18.5953% ( 768) 00:21:05.106 1159.043 - 1166.197: 19.4149% ( 722) 00:21:05.106 1166.197 - 1173.352: 20.3503% ( 824) 00:21:05.106 1173.352 - 1180.507: 21.3515% ( 882) 00:21:05.106 1180.507 - 1187.661: 22.3641% ( 892) 00:21:05.106 1187.661 - 1194.816: 23.4755% ( 979) 00:21:05.106 1194.816 - 1201.970: 24.5777% ( 971) 00:21:05.106 1201.970 - 1209.125: 25.6016% ( 902) 00:21:05.106 1209.125 - 1216.279: 26.5756% ( 858) 00:21:05.106 1216.279 - 1223.434: 27.6166% ( 917) 00:21:05.106 1223.434 - 1230.589: 28.6382% ( 900) 00:21:05.106 1230.589 - 1237.743: 29.6837% ( 921) 00:21:05.106 1237.743 - 1244.898: 30.7077% ( 902) 00:21:05.106 1244.898 - 1252.052: 31.9144% ( 1063) 00:21:05.106 1252.052 - 1259.207: 33.1551% ( 1093) 
00:21:05.106 1259.207 - 1266.362: 34.3936% ( 1091) 00:21:05.106 1266.362 - 1273.516: 35.4175% ( 902) 00:21:05.106 1273.516 - 1280.671: 36.6231% ( 1062) 00:21:05.106 1280.671 - 1287.825: 37.6879% ( 938) 00:21:05.106 1287.825 - 1294.980: 38.8889% ( 1058) 00:21:05.106 1294.980 - 1302.134: 39.9560% ( 940) 00:21:05.106 1302.134 - 1309.289: 40.9697% ( 893) 00:21:05.106 1309.289 - 1316.444: 41.9698% ( 881) 00:21:05.106 1316.444 - 1323.598: 43.0947% ( 991) 00:21:05.106 1323.598 - 1330.753: 44.2469% ( 1015) 00:21:05.106 1330.753 - 1337.907: 45.3197% ( 945) 00:21:05.106 1337.907 - 1345.062: 46.3629% ( 919) 00:21:05.106 1345.062 - 1352.217: 47.3721% ( 889) 00:21:05.106 1352.217 - 1359.371: 48.5663% ( 1052) 00:21:05.106 1359.371 - 1366.526: 49.5641% ( 879) 00:21:05.106 1366.526 - 1373.680: 50.5449% ( 864) 00:21:05.106 1373.680 - 1380.835: 51.6074% ( 936) 00:21:05.106 1380.835 - 1387.990: 52.7835% ( 1036) 00:21:05.106 1387.990 - 1395.144: 54.0753% ( 1138) 00:21:05.106 1395.144 - 1402.299: 55.1753% ( 969) 00:21:05.106 1402.299 - 1409.453: 56.2355% ( 934) 00:21:05.106 1409.453 - 1416.608: 57.1539% ( 809) 00:21:05.106 1416.608 - 1423.762: 58.1665% ( 892) 00:21:05.106 1423.762 - 1430.917: 59.2778% ( 979) 00:21:05.106 1430.917 - 1438.072: 60.2348% ( 843) 00:21:05.106 1438.072 - 1445.226: 61.2757% ( 917) 00:21:05.106 1445.226 - 1452.381: 62.2293% ( 840) 00:21:05.106 1452.381 - 1459.535: 63.2407% ( 891) 00:21:05.106 1459.535 - 1466.690: 64.2578% ( 896) 00:21:05.106 1466.690 - 1473.845: 65.2466% ( 871) 00:21:05.106 1473.845 - 1480.999: 66.3080% ( 935) 00:21:05.106 1480.999 - 1488.154: 67.3058% ( 879) 00:21:05.106 1488.154 - 1495.308: 68.2763% ( 855) 00:21:05.106 1495.308 - 1502.463: 69.2844% ( 888) 00:21:05.106 1502.463 - 1509.617: 70.3424% ( 932) 00:21:05.106 1509.617 - 1516.772: 71.3470% ( 885) 00:21:05.106 1516.772 - 1523.927: 72.2222% ( 771) 00:21:05.106 1523.927 - 1531.081: 73.0237% ( 706) 00:21:05.106 1531.081 - 1538.236: 73.8569% ( 734) 00:21:05.106 1538.236 - 1545.390: 74.6209% ( 673) 00:21:05.106 1545.390 - 1552.545: 75.2724% ( 574) 00:21:05.106 1552.545 - 1559.700: 75.9581% ( 604) 00:21:05.106 1559.700 - 1566.854: 76.5699% ( 539) 00:21:05.106 1566.854 - 1574.009: 77.2193% ( 572) 00:21:05.106 1574.009 - 1581.163: 77.8255% ( 534) 00:21:05.106 1581.163 - 1588.318: 78.4907% ( 586) 00:21:05.106 1588.318 - 1595.472: 79.0446% ( 488) 00:21:05.106 1595.472 - 1602.627: 79.6497% ( 533) 00:21:05.106 1602.627 - 1609.782: 80.1242% ( 418) 00:21:05.106 1609.782 - 1616.936: 80.5964% ( 416) 00:21:05.106 1616.936 - 1624.091: 81.0528% ( 402) 00:21:05.106 1624.091 - 1631.245: 81.4308% ( 333) 00:21:05.106 1631.245 - 1638.400: 81.8633% ( 381) 00:21:05.106 1638.400 - 1645.555: 82.2004% ( 297) 00:21:05.106 1645.555 - 1652.709: 82.5467% ( 305) 00:21:05.106 1652.709 - 1659.864: 82.9235% ( 332) 00:21:05.106 1659.864 - 1667.018: 83.3254% ( 354) 00:21:05.106 1667.018 - 1674.173: 83.6603% ( 295) 00:21:05.106 1674.173 - 1681.328: 84.0122% ( 310) 00:21:05.106 1681.328 - 1688.482: 84.3845% ( 328) 00:21:05.106 1688.482 - 1695.637: 84.7285% ( 303) 00:21:05.106 1695.637 - 1702.791: 85.0747% ( 305) 00:21:05.106 1702.791 - 1709.946: 85.4266% ( 310) 00:21:05.106 1709.946 - 1717.100: 85.7706% ( 303) 00:21:05.106 1717.100 - 1724.255: 86.0884% ( 280) 00:21:05.106 1724.255 - 1731.410: 86.4392% ( 309) 00:21:05.106 1731.410 - 1738.564: 86.7241% ( 251) 00:21:05.106 1738.564 - 1745.719: 86.9409% ( 191) 00:21:05.106 1745.719 - 1752.873: 87.1895% ( 219) 00:21:05.106 1752.873 - 1760.028: 87.4075% ( 192) 00:21:05.106 1760.028 - 1767.183: 87.6901% ( 
249) 00:21:05.106 1767.183 - 1774.337: 87.9705% ( 247) 00:21:05.106 1774.337 - 1781.492: 88.1885% ( 192) 00:21:05.106 1781.492 - 1788.646: 88.3678% ( 158) 00:21:05.106 1788.646 - 1795.801: 88.5290% ( 142) 00:21:05.106 1795.801 - 1802.955: 88.6789% ( 132) 00:21:05.106 1802.955 - 1810.110: 88.8083% ( 114) 00:21:05.106 1810.110 - 1817.265: 88.9491% ( 124) 00:21:05.106 1817.265 - 1824.419: 89.0989% ( 132) 00:21:05.106 1824.419 - 1831.574: 89.3282% ( 202) 00:21:05.106 1831.574 - 1845.883: 89.6472% ( 281) 00:21:05.106 1845.883 - 1860.192: 89.9378% ( 256) 00:21:05.106 1860.192 - 1874.501: 90.1887% ( 221) 00:21:05.106 1874.501 - 1888.810: 90.4282% ( 211) 00:21:05.106 1888.810 - 1903.120: 90.6598% ( 204) 00:21:05.106 1903.120 - 1917.429: 90.8800% ( 194) 00:21:05.106 1917.429 - 1931.738: 91.1150% ( 207) 00:21:05.106 1931.738 - 1946.047: 91.3715% ( 226) 00:21:05.106 1946.047 - 1960.356: 91.6326% ( 230) 00:21:05.106 1960.356 - 1974.666: 91.8858% ( 223) 00:21:05.106 1974.666 - 1988.975: 92.1287% ( 214) 00:21:05.106 1988.975 - 2003.284: 92.3239% ( 172) 00:21:05.106 2003.284 - 2017.593: 92.5589% ( 207) 00:21:05.106 2017.593 - 2031.902: 92.7689% ( 185) 00:21:05.106 2031.902 - 2046.211: 92.9540% ( 163) 00:21:05.106 2046.211 - 2060.521: 93.1106% ( 138) 00:21:05.106 2060.521 - 2074.830: 93.2854% ( 154) 00:21:05.106 2074.830 - 2089.139: 93.4705% ( 163) 00:21:05.106 2089.139 - 2103.448: 93.6373% ( 147) 00:21:05.106 2103.448 - 2117.757: 93.7588% ( 107) 00:21:05.106 2117.757 - 2132.066: 93.8871% ( 113) 00:21:05.106 2132.066 - 2146.376: 93.9870% ( 88) 00:21:05.106 2146.376 - 2160.685: 94.1107% ( 109) 00:21:05.106 2160.685 - 2174.994: 94.2219% ( 98) 00:21:05.106 2174.994 - 2189.303: 94.3514% ( 114) 00:21:05.106 2189.303 - 2203.612: 94.4796% ( 113) 00:21:05.106 2203.612 - 2217.921: 94.6136% ( 118) 00:21:05.106 2217.921 - 2232.231: 94.7339% ( 106) 00:21:05.106 2232.231 - 2246.540: 94.8588% ( 110) 00:21:05.106 2246.540 - 2260.849: 94.9848% ( 111) 00:21:05.106 2260.849 - 2275.158: 95.0915% ( 94) 00:21:05.106 2275.158 - 2289.467: 95.2504% ( 140) 00:21:05.106 2289.467 - 2303.776: 95.3571% ( 94) 00:21:05.106 2303.776 - 2318.086: 95.4559% ( 87) 00:21:05.106 2318.086 - 2332.395: 95.5774% ( 107) 00:21:05.106 2332.395 - 2346.704: 95.6965% ( 105) 00:21:05.106 2346.704 - 2361.013: 95.8657% ( 149) 00:21:05.106 2361.013 - 2375.322: 95.9996% ( 118) 00:21:05.106 2375.322 - 2389.631: 96.0643% ( 57) 00:21:05.106 2389.631 - 2403.941: 96.1654% ( 89) 00:21:05.106 2403.941 - 2418.250: 96.2335% ( 60) 00:21:05.106 2418.250 - 2432.559: 96.2914% ( 51) 00:21:05.106 2432.559 - 2446.868: 96.3618% ( 62) 00:21:05.106 2446.868 - 2461.177: 96.4219% ( 53) 00:21:05.106 2461.177 - 2475.486: 96.4730% ( 45) 00:21:05.106 2475.486 - 2489.796: 96.5400% ( 59) 00:21:05.106 2489.796 - 2504.105: 96.6001% ( 53) 00:21:05.106 2504.105 - 2518.414: 96.6683% ( 60) 00:21:05.106 2518.414 - 2532.723: 96.7352% ( 59) 00:21:05.106 2532.723 - 2547.032: 96.7897% ( 48) 00:21:05.106 2547.032 - 2561.341: 96.8419% ( 46) 00:21:05.106 2561.341 - 2575.651: 96.8896% ( 42) 00:21:05.106 2575.651 - 2589.960: 96.9441% ( 48) 00:21:05.106 2589.960 - 2604.269: 96.9895% ( 40) 00:21:05.106 2604.269 - 2618.578: 97.0463% ( 50) 00:21:05.106 2618.578 - 2632.887: 97.0917% ( 40) 00:21:05.106 2632.887 - 2647.197: 97.1405% ( 43) 00:21:05.106 2647.197 - 2661.506: 97.2007% ( 53) 00:21:05.106 2661.506 - 2675.815: 97.2631% ( 55) 00:21:05.106 2675.815 - 2690.124: 97.3210% ( 51) 00:21:05.106 2690.124 - 2704.433: 97.3846% ( 56) 00:21:05.106 2704.433 - 2718.742: 97.4606% ( 67) 00:21:05.106 2718.742 - 
2733.052: 97.5310% ( 62) 00:21:05.106 2733.052 - 2747.361: 97.6025% ( 63) 00:21:05.106 2747.361 - 2761.670: 97.6683% ( 58) 00:21:05.106 2761.670 - 2775.979: 97.7342% ( 58) 00:21:05.106 2775.979 - 2790.288: 97.7909% ( 50) 00:21:05.106 2790.288 - 2804.597: 97.8568% ( 58) 00:21:05.106 2804.597 - 2818.907: 97.9192% ( 55) 00:21:05.106 2818.907 - 2833.216: 97.9839% ( 57) 00:21:05.106 2833.216 - 2847.525: 98.0259% ( 37) 00:21:05.106 2847.525 - 2861.834: 98.0747% ( 43) 00:21:05.106 2861.834 - 2876.143: 98.1179% ( 38) 00:21:05.106 2876.143 - 2890.452: 98.1633% ( 40) 00:21:05.107 2890.452 - 2904.762: 98.2132% ( 44) 00:21:05.107 2904.762 - 2919.071: 98.2689% ( 49) 00:21:05.107 2919.071 - 2933.380: 98.3120% ( 38) 00:21:05.107 2933.380 - 2947.689: 98.3483% ( 32) 00:21:05.107 2947.689 - 2961.998: 98.3858% ( 33) 00:21:05.107 2961.998 - 2976.307: 98.4255% ( 35) 00:21:05.107 2976.307 - 2990.617: 98.4686% ( 38) 00:21:05.107 2990.617 - 3004.926: 98.5345% ( 58) 00:21:05.107 3004.926 - 3019.235: 98.5947% ( 53) 00:21:05.107 3019.235 - 3033.544: 98.6412% ( 41) 00:21:05.107 3033.544 - 3047.853: 98.7093% ( 60) 00:21:05.107 3047.853 - 3062.162: 98.7558% ( 41) 00:21:05.107 3062.162 - 3076.472: 98.7990% ( 38) 00:21:05.107 3076.472 - 3090.781: 98.8433% ( 39) 00:21:05.107 3090.781 - 3105.090: 98.8705% ( 24) 00:21:05.107 3105.090 - 3119.399: 98.8989% ( 25) 00:21:05.107 3119.399 - 3133.708: 98.9261% ( 24) 00:21:05.107 3133.708 - 3148.017: 98.9511% ( 22) 00:21:05.107 3148.017 - 3162.327: 98.9727% ( 19) 00:21:05.107 3162.327 - 3176.636: 98.9976% ( 22) 00:21:05.107 3176.636 - 3190.945: 99.0147% ( 15) 00:21:05.107 3190.945 - 3205.254: 99.0374% ( 20) 00:21:05.107 3205.254 - 3219.563: 99.0567% ( 17) 00:21:05.107 3219.563 - 3233.872: 99.0737% ( 15) 00:21:05.107 3233.872 - 3248.182: 99.0975% ( 21) 00:21:05.107 3248.182 - 3262.491: 99.1168% ( 17) 00:21:05.107 3262.491 - 3276.800: 99.1316% ( 13) 00:21:05.107 3276.800 - 3291.109: 99.1520% ( 18) 00:21:05.107 3291.109 - 3305.418: 99.1691% ( 15) 00:21:05.107 3305.418 - 3319.728: 99.1872% ( 16) 00:21:05.107 3319.728 - 3334.037: 99.2065% ( 17) 00:21:05.107 3334.037 - 3348.346: 99.2201% ( 12) 00:21:05.107 3348.346 - 3362.655: 99.2383% ( 16) 00:21:05.107 3362.655 - 3376.964: 99.2576% ( 17) 00:21:05.107 3376.964 - 3391.273: 99.2746% ( 15) 00:21:05.107 3391.273 - 3405.583: 99.2916% ( 15) 00:21:05.107 3405.583 - 3419.892: 99.3098% ( 16) 00:21:05.107 3419.892 - 3434.201: 99.3280% ( 16) 00:21:05.107 3434.201 - 3448.510: 99.3473% ( 17) 00:21:05.107 3448.510 - 3462.819: 99.3688% ( 19) 00:21:05.107 3462.819 - 3477.128: 99.3870% ( 16) 00:21:05.107 3477.128 - 3491.438: 99.4097% ( 20) 00:21:05.107 3491.438 - 3505.747: 99.4358% ( 23) 00:21:05.107 3505.747 - 3520.056: 99.4517% ( 14) 00:21:05.107 3520.056 - 3534.365: 99.4744% ( 20) 00:21:05.107 3534.365 - 3548.674: 99.4937% ( 17) 00:21:05.107 3548.674 - 3562.983: 99.5175% ( 21) 00:21:05.107 3562.983 - 3577.293: 99.5414% ( 21) 00:21:05.107 3577.293 - 3591.602: 99.5550% ( 12) 00:21:05.107 3591.602 - 3605.911: 99.5754% ( 18) 00:21:05.107 3605.911 - 3620.220: 99.5902% ( 13) 00:21:05.107 3620.220 - 3634.529: 99.6027% ( 11) 00:21:05.107 3634.529 - 3648.838: 99.6197% ( 15) 00:21:05.107 3648.838 - 3663.148: 99.6333% ( 12) 00:21:05.107 3663.148 - 3691.766: 99.6560% ( 20) 00:21:05.107 3691.766 - 3720.384: 99.6776% ( 19) 00:21:05.107 3720.384 - 3749.003: 99.6946% ( 15) 00:21:05.107 3749.003 - 3777.621: 99.7071% ( 11) 00:21:05.107 3777.621 - 3806.239: 99.7173% ( 9) 00:21:05.107 3806.239 - 3834.858: 99.7287% ( 10) 00:21:05.107 3834.858 - 3863.476: 99.7389% ( 9) 
00:21:05.107 3863.476 - 3892.094: 99.7469% ( 7) 00:21:05.107 3892.094 - 3920.713: 99.7582% ( 10) 00:21:05.107 3920.713 - 3949.331: 99.7673% ( 8) 00:21:05.107 3949.331 - 3977.949: 99.7752% ( 7) 00:21:05.107 3977.949 - 4006.568: 99.7809% ( 5) 00:21:05.107 4006.568 - 4035.186: 99.7877% ( 6) 00:21:05.107 4035.186 - 4063.804: 99.7923% ( 4) 00:21:05.107 4063.804 - 4092.423: 99.7979% ( 5) 00:21:05.107 4092.423 - 4121.041: 99.8013% ( 3) 00:21:05.107 4121.041 - 4149.659: 99.8047% ( 3) 00:21:05.107 4149.659 - 4178.278: 99.8116% ( 6) 00:21:05.107 4178.278 - 4206.896: 99.8195% ( 7) 00:21:05.107 4206.896 - 4235.514: 99.8240% ( 4) 00:21:05.107 4235.514 - 4264.133: 99.8286% ( 4) 00:21:05.107 4264.133 - 4292.751: 99.8331% ( 4) 00:21:05.107 4321.369 - 4349.988: 99.8365% ( 3) 00:21:05.107 4349.988 - 4378.606: 99.8377% ( 1) 00:21:05.107 4378.606 - 4407.224: 99.8399% ( 2) 00:21:05.107 4407.224 - 4435.843: 99.8422% ( 2) 00:21:05.107 4435.843 - 4464.461: 99.8468% ( 4) 00:21:05.107 4464.461 - 4493.079: 99.8490% ( 2) 00:21:05.107 4493.079 - 4521.698: 99.8513% ( 2) 00:21:05.107 4521.698 - 4550.316: 99.8524% ( 1) 00:21:05.107 4550.316 - 4578.934: 99.8547% ( 2) 00:21:05.107 4578.934 - 4607.553: 99.8558% ( 1) 00:21:05.107 4607.553 - 4636.171: 99.8626% ( 6) 00:21:05.107 4636.171 - 4664.790: 99.8660% ( 3) 00:21:05.107 4664.790 - 4693.408: 99.8683% ( 2) 00:21:05.107 4693.408 - 4722.026: 99.8706% ( 2) 00:21:05.107 4722.026 - 4750.645: 99.8751% ( 4) 00:21:05.107 4750.645 - 4779.263: 99.8774% ( 2) 00:21:05.107 4836.500 - 4865.118: 99.8808% ( 3) 00:21:05.107 4865.118 - 4893.736: 99.8819% ( 1) 00:21:05.107 5036.828 - 5065.446: 99.8831% ( 1) 00:21:05.107 5094.065 - 5122.683: 99.8842% ( 1) 00:21:05.107 5122.683 - 5151.301: 99.8865% ( 2) 00:21:05.107 5151.301 - 5179.920: 99.8876% ( 1) 00:21:05.107 5208.538 - 5237.156: 99.8888% ( 1) 00:21:05.107 5294.393 - 5323.011: 99.8910% ( 2) 00:21:05.107 5323.011 - 5351.630: 99.8956% ( 4) 00:21:05.107 5408.866 - 5437.485: 99.8978% ( 2) 00:21:05.107 5437.485 - 5466.103: 99.9001% ( 2) 00:21:05.107 5466.103 - 5494.721: 99.9012% ( 1) 00:21:05.107 5494.721 - 5523.340: 99.9035% ( 2) 00:21:05.107 5523.340 - 5551.958: 99.9046% ( 1) 00:21:05.107 5551.958 - 5580.576: 99.9069% ( 2) 00:21:05.107 5666.431 - 5695.050: 99.9092% ( 2) 00:21:05.107 5695.050 - 5723.668: 99.9115% ( 2) 00:21:05.107 5723.668 - 5752.286: 99.9126% ( 1) 00:21:05.107 5780.905 - 5809.523: 99.9137% ( 1) 00:21:05.107 5838.141 - 5866.760: 99.9149% ( 1) 00:21:05.107 5895.378 - 5923.997: 99.9160% ( 1) 00:21:05.107 5923.997 - 5952.615: 99.9171% ( 1) 00:21:05.107 6009.852 - 6038.470: 99.9183% ( 1) 00:21:05.107 6038.470 - 6067.088: 99.9194% ( 1) 00:21:05.107 6067.088 - 6095.707: 99.9217% ( 2) 00:21:05.107 6410.508 - 6439.127: 99.9228% ( 1) 00:21:05.107 6610.837 - 6639.455: 99.9239% ( 1) 00:21:05.107 6639.455 - 6668.073: 99.9273% ( 3) 00:21:05.107 6668.073 - 6696.692: 99.9285% ( 1) 00:21:05.107 6925.638 - 6954.257: 99.9296% ( 1) 00:21:05.107 7011.493 - 7040.112: 99.9308% ( 1) 00:21:05.107 7211.822 - 7240.440: 99.9319% ( 1) 00:21:05.107 7269.059 - 7297.677: 99.9330% ( 1) 00:21:05.107 7440.769 - 7498.005: 99.9342% ( 1) 00:21:05.107 8013.135 - 8070.372: 99.9364% ( 2) 00:21:05.107 8184.845 - 8242.082: 99.9376% ( 1) 00:21:05.107 8413.792 - 8471.029: 99.9387% ( 1) 00:21:05.107 8757.212 - 8814.449: 99.9398% ( 1) 00:21:05.107 8814.449 - 8871.686: 99.9410% ( 1) 00:21:05.107 8871.686 - 8928.922: 99.9444% ( 3) 00:21:05.107 8928.922 - 8986.159: 99.9466% ( 2) 00:21:05.107 8986.159 - 9043.396: 99.9478% ( 1) 00:21:05.107 9100.632 - 9157.869: 99.9489% ( 1) 
00:21:05.107 9157.869 - 9215.106: 99.9523% ( 3) 00:21:05.107 9215.106 - 9272.342: 99.9569% ( 4) 00:21:05.107 9386.816 - 9444.052: 99.9580% ( 1) 00:21:05.107 9501.289 - 9558.526: 99.9591% ( 1) 00:21:05.107 9558.526 - 9615.762: 99.9614% ( 2) 00:21:05.107 9615.762 - 9672.999: 99.9637% ( 2) 00:21:05.107 9672.999 - 9730.236: 99.9648% ( 1) 00:21:05.107 9730.236 - 9787.472: 99.9694% ( 4) 00:21:05.107 9787.472 - 9844.709: 99.9728% ( 3) 00:21:05.107 9844.709 - 9901.946: 99.9807% ( 7) 00:21:05.107 9901.946 - 9959.183: 99.9830% ( 2) 00:21:05.107 9959.183 - 10016.419: 99.9841% ( 1) 00:21:05.107 10073.656 - 10130.893: 99.9864% ( 2) 00:21:05.107 10130.893 - 10188.129: 99.9875% ( 1) 00:21:05.107 10188.129 - 10245.366: 99.9909% ( 3) 00:21:05.108 10359.839 - 10417.076: 99.9921% ( 1) 00:21:05.108 10531.549 - 10588.786: 99.9932% ( 1) 00:21:05.108 10874.969 - 10932.206: 99.9943% ( 1) 00:21:05.108 10989.443 - 11046.679: 99.9955% ( 1) 00:21:05.108 11046.679 - 11103.916: 99.9966% ( 1) 00:21:05.108 11161.153 - 11218.390: 99.9977% ( 1) 00:21:05.108 11218.390 - 11275.626: 99.9989% ( 1) 00:21:05.108 11790.756 - 11847.993: 100.0000% ( 1) 00:21:05.108 00:21:05.108 07:31:08 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:21:05.108 00:21:05.108 real 0m2.538s 00:21:05.108 user 0m2.194s 00:21:05.108 sys 0m0.257s 00:21:05.108 07:31:08 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.108 07:31:08 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:21:05.108 ************************************ 00:21:05.108 END TEST nvme_perf 00:21:05.108 ************************************ 00:21:05.108 07:31:08 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:21:05.108 07:31:08 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:05.108 07:31:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.108 07:31:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:05.108 ************************************ 00:21:05.108 START TEST nvme_hello_world 00:21:05.108 ************************************ 00:21:05.108 07:31:08 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:21:05.108 Initializing NVMe Controllers 00:21:05.108 Attached to 0000:00:10.0 00:21:05.108 Namespace ID: 1 size: 5GB 00:21:05.108 Initialization complete. 00:21:05.108 INFO: using host memory buffer for IO 00:21:05.108 Hello world! 
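The write pass that ends above comes from a single spdk_nvme_perf invocation whose flags appear at the start of the trace. The sketch below shows how that run could be repeated by hand: the binary path and flags are copied from the log, while the expectation that the controller at 0000:00:10.0 is already bound to a userspace driver (normally handled by scripts/setup.sh) is an assumption, not something this log states.

# Sketch: rerun the nvme_perf write pass with the flags traced above.
# -q 128  : keep 128 I/Os outstanding
# -w write: 100% write workload
# -o 12288: 12 KiB I/O size
# -t 1    : run for one second
# -LL     : track per-I/O latency and print the histograms shown above
# -i 0    : shared-memory instance ID used by the harness
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
"$PERF" -q 128 -w write -o 12288 -t 1 -LL -i 0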
00:21:05.108 00:21:05.108 real 0m0.257s 00:21:05.108 user 0m0.072s 00:21:05.108 sys 0m0.145s 00:21:05.108 07:31:08 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.108 07:31:08 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:05.108 ************************************ 00:21:05.108 END TEST nvme_hello_world 00:21:05.108 ************************************ 00:21:05.108 07:31:09 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:21:05.108 07:31:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:05.108 07:31:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.108 07:31:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:05.367 ************************************ 00:21:05.367 START TEST nvme_sgl 00:21:05.367 ************************************ 00:21:05.367 07:31:09 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:21:05.367 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:21:05.367 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:21:05.367 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:21:05.367 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:21:05.367 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:21:05.367 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:21:05.627 NVMe Readv/Writev Request test 00:21:05.627 Attached to 0000:00:10.0 00:21:05.627 0000:00:10.0: build_io_request_2 test passed 00:21:05.627 0000:00:10.0: build_io_request_4 test passed 00:21:05.627 0000:00:10.0: build_io_request_5 test passed 00:21:05.627 0000:00:10.0: build_io_request_6 test passed 00:21:05.627 0000:00:10.0: build_io_request_7 test passed 00:21:05.627 0000:00:10.0: build_io_request_10 test passed 00:21:05.627 Cleaning up... 00:21:05.627 00:21:05.627 real 0m0.311s 00:21:05.627 user 0m0.133s 00:21:05.627 sys 0m0.132s 00:21:05.627 07:31:09 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.627 07:31:09 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:21:05.627 ************************************ 00:21:05.627 END TEST nvme_sgl 00:21:05.627 ************************************ 00:21:05.627 07:31:09 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:21:05.627 07:31:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:05.627 07:31:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.627 07:31:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:05.627 ************************************ 00:21:05.627 START TEST nvme_e2edp 00:21:05.627 ************************************ 00:21:05.627 07:31:09 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:21:05.887 NVMe Write/Read with End-to-End data protection test 00:21:05.887 Attached to 0000:00:10.0 00:21:05.887 Cleaning up... 
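Each step in this stretch is launched through the harness's run_test helper, which is what produces the matching START TEST / END TEST banners and the real/user/sys timing around every command. The snippet below is only an illustration of that observable behaviour, not the real autotest_common.sh implementation (which, per the trace, also performs argument checks and xtrace toggling); the banner width and the use of bash's time builtin are assumptions.

# Illustrative stand-in for the run_test pattern seen throughout this log.
run_test() {
    local name=$1; shift
    printf '%s\n' '************************************' "START TEST $name" '************************************'
    time "$@"
    printf '%s\n' '************************************' "END TEST $name" '************************************'
}
# e.g.: run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0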
00:21:05.887 00:21:05.887 real 0m0.278s 00:21:05.887 user 0m0.098s 00:21:05.887 sys 0m0.134s 00:21:05.887 07:31:09 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.887 07:31:09 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:21:05.887 ************************************ 00:21:05.887 END TEST nvme_e2edp 00:21:05.887 ************************************ 00:21:05.887 07:31:09 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:21:05.887 07:31:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:05.887 07:31:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.887 07:31:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:05.887 ************************************ 00:21:05.887 START TEST nvme_reserve 00:21:05.887 ************************************ 00:21:05.887 07:31:09 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:21:06.147 ===================================================== 00:21:06.147 NVMe Controller at PCI bus 0, device 16, function 0 00:21:06.147 ===================================================== 00:21:06.147 Reservations: Not Supported 00:21:06.147 Reservation test passed 00:21:06.147 00:21:06.147 real 0m0.264s 00:21:06.147 user 0m0.085s 00:21:06.147 sys 0m0.137s 00:21:06.147 07:31:09 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:06.147 07:31:09 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:21:06.147 ************************************ 00:21:06.147 END TEST nvme_reserve 00:21:06.147 ************************************ 00:21:06.147 07:31:10 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:21:06.147 07:31:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:06.147 07:31:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:06.147 07:31:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:06.147 ************************************ 00:21:06.147 START TEST nvme_err_injection 00:21:06.147 ************************************ 00:21:06.147 07:31:10 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:21:06.407 NVMe Error Injection test 00:21:06.407 Attached to 0000:00:10.0 00:21:06.407 0000:00:10.0: get features failed as expected 00:21:06.407 0000:00:10.0: get features successfully as expected 00:21:06.407 0000:00:10.0: read failed as expected 00:21:06.407 0000:00:10.0: read successfully as expected 00:21:06.407 Cleaning up... 
00:21:06.407 00:21:06.407 real 0m0.269s 00:21:06.407 user 0m0.081s 00:21:06.407 sys 0m0.147s 00:21:06.407 07:31:10 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:06.407 07:31:10 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:21:06.407 ************************************ 00:21:06.407 END TEST nvme_err_injection 00:21:06.407 ************************************ 00:21:06.666 07:31:10 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:21:06.666 07:31:10 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:21:06.666 07:31:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:06.666 07:31:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:06.666 ************************************ 00:21:06.666 START TEST nvme_overhead 00:21:06.666 ************************************ 00:21:06.666 07:31:10 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:21:08.051 Initializing NVMe Controllers 00:21:08.051 Attached to 0000:00:10.0 00:21:08.051 Initialization complete. Launching workers. 00:21:08.051 submit (in ns) avg, min, max = 12533.0, 9018.3, 152683.8 00:21:08.051 complete (in ns) avg, min, max = 7528.0, 5509.2, 154687.3 00:21:08.051 00:21:08.051 Submit histogram 00:21:08.051 ================ 00:21:08.051 Range in us Cumulative Count 00:21:08.051 8.999 - 9.055: 0.0152% ( 1) 00:21:08.051 9.279 - 9.334: 0.0305% ( 1) 00:21:08.051 9.614 - 9.670: 0.0457% ( 1) 00:21:08.051 9.670 - 9.726: 0.0914% ( 3) 00:21:08.051 9.726 - 9.782: 0.1371% ( 3) 00:21:08.051 9.782 - 9.838: 0.2741% ( 9) 00:21:08.051 9.838 - 9.893: 0.4569% ( 12) 00:21:08.051 9.893 - 9.949: 0.8986% ( 29) 00:21:08.051 9.949 - 10.005: 1.4164% ( 34) 00:21:08.051 10.005 - 10.061: 1.9342% ( 34) 00:21:08.051 10.061 - 10.117: 2.9242% ( 65) 00:21:08.051 10.117 - 10.173: 3.7009% ( 51) 00:21:08.051 10.173 - 10.229: 4.6756% ( 64) 00:21:08.051 10.229 - 10.285: 5.8635% ( 78) 00:21:08.051 10.285 - 10.341: 7.2190% ( 89) 00:21:08.051 10.341 - 10.397: 8.5440% ( 87) 00:21:08.051 10.397 - 10.452: 10.0213% ( 97) 00:21:08.051 10.452 - 10.508: 11.6966% ( 110) 00:21:08.051 10.508 - 10.564: 13.3719% ( 110) 00:21:08.051 10.564 - 10.620: 14.8949% ( 100) 00:21:08.051 10.620 - 10.676: 16.4788% ( 104) 00:21:08.051 10.676 - 10.732: 18.2760% ( 118) 00:21:08.051 10.732 - 10.788: 20.0122% ( 114) 00:21:08.051 10.788 - 10.844: 21.8550% ( 121) 00:21:08.051 10.844 - 10.900: 23.6065% ( 115) 00:21:08.051 10.900 - 10.955: 25.1904% ( 104) 00:21:08.051 10.955 - 11.011: 26.9723% ( 117) 00:21:08.051 11.011 - 11.067: 28.5105% ( 101) 00:21:08.051 11.067 - 11.123: 30.3076% ( 118) 00:21:08.051 11.123 - 11.179: 32.3180% ( 132) 00:21:08.051 11.179 - 11.235: 34.2979% ( 130) 00:21:08.051 11.235 - 11.291: 35.9123% ( 106) 00:21:08.051 11.291 - 11.347: 37.4962% ( 104) 00:21:08.051 11.347 - 11.403: 39.1867% ( 111) 00:21:08.051 11.403 - 11.459: 41.2275% ( 134) 00:21:08.051 11.459 - 11.514: 43.1465% ( 126) 00:21:08.051 11.514 - 11.570: 45.2635% ( 139) 00:21:08.051 11.570 - 11.626: 46.8931% ( 107) 00:21:08.051 11.626 - 11.682: 48.9034% ( 132) 00:21:08.051 11.682 - 11.738: 50.9138% ( 132) 00:21:08.051 11.738 - 11.794: 52.6348% ( 113) 00:21:08.051 11.794 - 11.850: 54.4928% ( 122) 00:21:08.051 11.850 - 11.906: 56.2595% ( 116) 00:21:08.051 11.906 - 11.962: 58.2242% ( 129) 00:21:08.051 11.962 - 12.017: 60.0975% ( 123) 00:21:08.051 12.017 - 12.073: 62.1231% ( 
133) 00:21:08.051 12.073 - 12.129: 63.7070% ( 104) 00:21:08.051 12.129 - 12.185: 65.4432% ( 114) 00:21:08.051 12.185 - 12.241: 67.0880% ( 108) 00:21:08.051 12.241 - 12.297: 68.6567% ( 103) 00:21:08.051 12.297 - 12.353: 69.6162% ( 63) 00:21:08.051 12.353 - 12.409: 71.1392% ( 100) 00:21:08.051 12.409 - 12.465: 72.5860% ( 95) 00:21:08.051 12.465 - 12.521: 73.8349% ( 82) 00:21:08.051 12.521 - 12.576: 74.9924% ( 76) 00:21:08.051 12.576 - 12.632: 76.3479% ( 89) 00:21:08.051 12.632 - 12.688: 77.3835% ( 68) 00:21:08.051 12.688 - 12.744: 78.4800% ( 72) 00:21:08.051 12.744 - 12.800: 79.5157% ( 68) 00:21:08.051 12.800 - 12.856: 80.5361% ( 67) 00:21:08.051 12.856 - 12.912: 81.5413% ( 66) 00:21:08.051 12.912 - 12.968: 82.2571% ( 47) 00:21:08.051 12.968 - 13.024: 82.8967% ( 42) 00:21:08.051 13.024 - 13.079: 83.5669% ( 44) 00:21:08.051 13.079 - 13.135: 84.3284% ( 50) 00:21:08.051 13.135 - 13.191: 84.8462% ( 34) 00:21:08.051 13.191 - 13.247: 85.5772% ( 48) 00:21:08.051 13.247 - 13.303: 86.0341% ( 30) 00:21:08.051 13.303 - 13.359: 86.3844% ( 23) 00:21:08.051 13.359 - 13.415: 86.8870% ( 33) 00:21:08.051 13.415 - 13.471: 87.3744% ( 32) 00:21:08.051 13.471 - 13.527: 87.7856% ( 27) 00:21:08.051 13.527 - 13.583: 88.1663% ( 25) 00:21:08.051 13.583 - 13.638: 88.5471% ( 25) 00:21:08.051 13.638 - 13.694: 88.8517% ( 20) 00:21:08.051 13.694 - 13.750: 89.1410% ( 19) 00:21:08.051 13.750 - 13.806: 89.5218% ( 25) 00:21:08.051 13.806 - 13.862: 89.7807% ( 17) 00:21:08.051 13.862 - 13.918: 90.0701% ( 19) 00:21:08.051 13.918 - 13.974: 90.2528% ( 12) 00:21:08.051 13.974 - 14.030: 90.5574% ( 20) 00:21:08.051 14.030 - 14.086: 90.7097% ( 10) 00:21:08.051 14.086 - 14.141: 90.9229% ( 14) 00:21:08.051 14.141 - 14.197: 91.0752% ( 10) 00:21:08.051 14.197 - 14.253: 91.1666% ( 6) 00:21:08.051 14.253 - 14.309: 91.2275% ( 4) 00:21:08.051 14.309 - 14.421: 91.4103% ( 12) 00:21:08.051 14.421 - 14.533: 91.5474% ( 9) 00:21:08.051 14.533 - 14.645: 91.6387% ( 6) 00:21:08.051 14.645 - 14.756: 91.7910% ( 10) 00:21:08.051 14.756 - 14.868: 91.9586% ( 11) 00:21:08.051 14.868 - 14.980: 92.2023% ( 16) 00:21:08.051 14.980 - 15.092: 92.2479% ( 3) 00:21:08.051 15.092 - 15.203: 92.4612% ( 14) 00:21:08.051 15.203 - 15.315: 92.6135% ( 10) 00:21:08.051 15.315 - 15.427: 92.7658% ( 10) 00:21:08.051 15.427 - 15.539: 92.8876% ( 8) 00:21:08.051 15.539 - 15.651: 92.9485% ( 4) 00:21:08.051 15.651 - 15.762: 93.0856% ( 9) 00:21:08.051 15.762 - 15.874: 93.1313% ( 3) 00:21:08.051 15.874 - 15.986: 93.2684% ( 9) 00:21:08.051 15.986 - 16.098: 93.4359% ( 11) 00:21:08.051 16.098 - 16.210: 93.5730% ( 9) 00:21:08.051 16.210 - 16.321: 93.7100% ( 9) 00:21:08.051 16.321 - 16.433: 93.7862% ( 5) 00:21:08.051 16.433 - 16.545: 93.9842% ( 13) 00:21:08.051 16.545 - 16.657: 94.0603% ( 5) 00:21:08.051 16.657 - 16.769: 94.2888% ( 15) 00:21:08.051 16.769 - 16.880: 94.4411% ( 10) 00:21:08.051 16.880 - 16.992: 94.7152% ( 18) 00:21:08.051 16.992 - 17.104: 94.8066% ( 6) 00:21:08.051 17.104 - 17.216: 94.8980% ( 6) 00:21:08.051 17.216 - 17.328: 95.0655% ( 11) 00:21:08.051 17.328 - 17.439: 95.2178% ( 10) 00:21:08.051 17.439 - 17.551: 95.4158% ( 13) 00:21:08.051 17.551 - 17.663: 95.5376% ( 8) 00:21:08.051 17.663 - 17.775: 95.6138% ( 5) 00:21:08.051 17.775 - 17.886: 95.6595% ( 3) 00:21:08.051 17.886 - 17.998: 95.7508% ( 6) 00:21:08.051 17.998 - 18.110: 95.8270% ( 5) 00:21:08.051 18.110 - 18.222: 95.9336% ( 7) 00:21:08.051 18.222 - 18.334: 95.9488% ( 1) 00:21:08.051 18.334 - 18.445: 95.9641% ( 1) 00:21:08.051 18.445 - 18.557: 96.0097% ( 3) 00:21:08.051 18.557 - 18.669: 96.0554% ( 3) 00:21:08.051 
18.669 - 18.781: 96.0707% ( 1) 00:21:08.051 18.781 - 18.893: 96.1164% ( 3) 00:21:08.051 18.893 - 19.004: 96.1773% ( 4) 00:21:08.051 19.004 - 19.116: 96.2077% ( 2) 00:21:08.051 19.116 - 19.228: 96.2382% ( 2) 00:21:08.051 19.228 - 19.340: 96.2991% ( 4) 00:21:08.051 19.340 - 19.452: 96.3296% ( 2) 00:21:08.051 19.452 - 19.563: 96.3600% ( 2) 00:21:08.051 19.563 - 19.675: 96.3753% ( 1) 00:21:08.051 19.675 - 19.787: 96.3905% ( 1) 00:21:08.051 19.787 - 19.899: 96.4057% ( 1) 00:21:08.051 20.122 - 20.234: 96.4362% ( 2) 00:21:08.051 20.234 - 20.346: 96.4514% ( 1) 00:21:08.051 20.346 - 20.458: 96.4971% ( 3) 00:21:08.051 20.458 - 20.569: 96.5123% ( 1) 00:21:08.051 20.569 - 20.681: 96.5428% ( 2) 00:21:08.051 20.681 - 20.793: 96.5733% ( 2) 00:21:08.051 20.793 - 20.905: 96.5885% ( 1) 00:21:08.051 20.905 - 21.017: 96.6189% ( 2) 00:21:08.051 21.352 - 21.464: 96.6646% ( 3) 00:21:08.051 21.464 - 21.576: 96.6951% ( 2) 00:21:08.051 21.576 - 21.687: 96.7103% ( 1) 00:21:08.051 21.687 - 21.799: 96.7560% ( 3) 00:21:08.051 21.799 - 21.911: 96.7712% ( 1) 00:21:08.051 21.911 - 22.023: 96.7865% ( 1) 00:21:08.051 22.023 - 22.134: 96.8169% ( 2) 00:21:08.051 22.134 - 22.246: 96.8322% ( 1) 00:21:08.051 22.246 - 22.358: 96.8779% ( 3) 00:21:08.051 22.358 - 22.470: 96.9388% ( 4) 00:21:08.051 22.470 - 22.582: 96.9540% ( 1) 00:21:08.051 22.582 - 22.693: 96.9692% ( 1) 00:21:08.051 22.693 - 22.805: 96.9997% ( 2) 00:21:08.051 22.805 - 22.917: 97.0758% ( 5) 00:21:08.051 22.917 - 23.029: 97.1063% ( 2) 00:21:08.051 23.029 - 23.141: 97.1977% ( 6) 00:21:08.051 23.141 - 23.252: 97.2434% ( 3) 00:21:08.051 23.252 - 23.364: 97.3348% ( 6) 00:21:08.052 23.364 - 23.476: 97.4261% ( 6) 00:21:08.052 23.476 - 23.588: 97.5327% ( 7) 00:21:08.052 23.588 - 23.700: 97.5937% ( 4) 00:21:08.052 23.700 - 23.811: 97.6546% ( 4) 00:21:08.052 23.811 - 23.923: 97.7764% ( 8) 00:21:08.052 23.923 - 24.035: 97.8069% ( 2) 00:21:08.052 24.035 - 24.147: 97.8526% ( 3) 00:21:08.052 24.147 - 24.259: 97.8983% ( 3) 00:21:08.052 24.259 - 24.370: 97.9440% ( 3) 00:21:08.052 24.370 - 24.482: 97.9896% ( 3) 00:21:08.052 24.482 - 24.594: 98.0506% ( 4) 00:21:08.052 24.594 - 24.706: 98.0963% ( 3) 00:21:08.052 24.706 - 24.817: 98.1419% ( 3) 00:21:08.052 24.817 - 24.929: 98.1724% ( 2) 00:21:08.052 24.929 - 25.041: 98.2181% ( 3) 00:21:08.052 25.041 - 25.153: 98.2486% ( 2) 00:21:08.052 25.153 - 25.265: 98.2942% ( 3) 00:21:08.052 25.265 - 25.376: 98.3704% ( 5) 00:21:08.052 25.488 - 25.600: 98.3856% ( 1) 00:21:08.052 25.712 - 25.824: 98.4009% ( 1) 00:21:08.052 25.935 - 26.047: 98.4313% ( 2) 00:21:08.052 26.271 - 26.383: 98.4465% ( 1) 00:21:08.052 26.383 - 26.494: 98.4770% ( 2) 00:21:08.052 26.494 - 26.606: 98.4922% ( 1) 00:21:08.052 26.606 - 26.718: 98.5227% ( 2) 00:21:08.052 26.830 - 26.941: 98.5684% ( 3) 00:21:08.052 27.165 - 27.277: 98.6141% ( 3) 00:21:08.052 27.277 - 27.389: 98.6293% ( 1) 00:21:08.052 27.389 - 27.500: 98.6445% ( 1) 00:21:08.052 27.612 - 27.724: 98.6750% ( 2) 00:21:08.052 27.836 - 27.948: 98.6902% ( 1) 00:21:08.052 28.283 - 28.395: 98.7055% ( 1) 00:21:08.052 28.395 - 28.507: 98.7359% ( 2) 00:21:08.052 28.618 - 28.842: 98.7664% ( 2) 00:21:08.052 28.842 - 29.066: 98.7968% ( 2) 00:21:08.052 29.513 - 29.736: 98.8273% ( 2) 00:21:08.052 29.736 - 29.960: 98.8578% ( 2) 00:21:08.052 30.854 - 31.078: 98.8730% ( 1) 00:21:08.052 31.301 - 31.525: 98.9034% ( 2) 00:21:08.052 35.102 - 35.326: 98.9339% ( 2) 00:21:08.052 35.326 - 35.549: 98.9948% ( 4) 00:21:08.052 35.549 - 35.773: 99.0253% ( 2) 00:21:08.052 35.773 - 35.997: 99.1319% ( 7) 00:21:08.052 35.997 - 36.220: 99.1471% ( 1) 
00:21:08.052 36.220 - 36.444: 99.1776% ( 2) 00:21:08.052 36.444 - 36.667: 99.2080% ( 2) 00:21:08.052 36.667 - 36.891: 99.2842% ( 5) 00:21:08.052 36.891 - 37.114: 99.3147% ( 2) 00:21:08.052 37.114 - 37.338: 99.4060% ( 6) 00:21:08.052 37.338 - 37.562: 99.4213% ( 1) 00:21:08.052 37.562 - 37.785: 99.4822% ( 4) 00:21:08.052 37.785 - 38.009: 99.5279% ( 3) 00:21:08.052 38.009 - 38.232: 99.5888% ( 4) 00:21:08.052 38.232 - 38.456: 99.6497% ( 4) 00:21:08.052 38.679 - 38.903: 99.6649% ( 1) 00:21:08.052 38.903 - 39.127: 99.6802% ( 1) 00:21:08.052 39.127 - 39.350: 99.7106% ( 2) 00:21:08.052 39.350 - 39.574: 99.7259% ( 1) 00:21:08.052 39.574 - 39.797: 99.7411% ( 1) 00:21:08.052 40.021 - 40.245: 99.7563% ( 1) 00:21:08.052 40.468 - 40.692: 99.7868% ( 2) 00:21:08.052 41.586 - 41.810: 99.8020% ( 1) 00:21:08.052 42.033 - 42.257: 99.8172% ( 1) 00:21:08.052 43.151 - 43.375: 99.8325% ( 1) 00:21:08.052 46.505 - 46.728: 99.8477% ( 1) 00:21:08.052 46.728 - 46.952: 99.8629% ( 1) 00:21:08.052 46.952 - 47.176: 99.8934% ( 2) 00:21:08.052 54.777 - 55.001: 99.9086% ( 1) 00:21:08.052 56.119 - 56.342: 99.9239% ( 1) 00:21:08.052 56.566 - 56.790: 99.9391% ( 1) 00:21:08.052 57.684 - 58.131: 99.9543% ( 1) 00:21:08.052 63.944 - 64.391: 99.9695% ( 1) 00:21:08.052 76.017 - 76.465: 99.9848% ( 1) 00:21:08.052 152.035 - 152.929: 100.0000% ( 1) 00:21:08.052 00:21:08.052 Complete histogram 00:21:08.052 ================== 00:21:08.052 Range in us Cumulative Count 00:21:08.052 5.506 - 5.534: 0.0152% ( 1) 00:21:08.052 5.534 - 5.562: 0.0457% ( 2) 00:21:08.052 5.562 - 5.590: 0.1523% ( 7) 00:21:08.052 5.590 - 5.617: 0.2284% ( 5) 00:21:08.052 5.617 - 5.645: 0.2894% ( 4) 00:21:08.052 5.645 - 5.673: 0.4264% ( 9) 00:21:08.052 5.673 - 5.701: 0.5787% ( 10) 00:21:08.052 5.701 - 5.729: 0.7310% ( 10) 00:21:08.052 5.729 - 5.757: 0.8986% ( 11) 00:21:08.052 5.757 - 5.785: 1.0661% ( 11) 00:21:08.052 5.785 - 5.813: 1.2489% ( 12) 00:21:08.052 5.813 - 5.841: 1.5078% ( 17) 00:21:08.052 5.841 - 5.869: 1.8733% ( 24) 00:21:08.052 5.869 - 5.897: 2.4368% ( 37) 00:21:08.052 5.897 - 5.925: 3.0155% ( 38) 00:21:08.052 5.925 - 5.953: 3.5029% ( 32) 00:21:08.052 5.953 - 5.981: 4.3405% ( 55) 00:21:08.052 5.981 - 6.009: 5.1477% ( 53) 00:21:08.052 6.009 - 6.037: 6.4575% ( 86) 00:21:08.052 6.037 - 6.065: 7.5693% ( 73) 00:21:08.052 6.065 - 6.093: 8.9400% ( 90) 00:21:08.052 6.093 - 6.121: 10.5087% ( 103) 00:21:08.052 6.121 - 6.148: 11.7271% ( 80) 00:21:08.052 6.148 - 6.176: 13.2501% ( 100) 00:21:08.052 6.176 - 6.204: 14.8949% ( 108) 00:21:08.052 6.204 - 6.232: 16.4179% ( 100) 00:21:08.052 6.232 - 6.260: 17.9561% ( 101) 00:21:08.052 6.260 - 6.288: 19.3725% ( 93) 00:21:08.052 6.288 - 6.316: 21.0174% ( 108) 00:21:08.052 6.316 - 6.344: 22.6774% ( 109) 00:21:08.052 6.344 - 6.372: 24.4289% ( 115) 00:21:08.052 6.372 - 6.400: 26.1956% ( 116) 00:21:08.052 6.400 - 6.428: 27.9165% ( 113) 00:21:08.052 6.428 - 6.456: 29.4548% ( 101) 00:21:08.052 6.456 - 6.484: 31.4499% ( 131) 00:21:08.052 6.484 - 6.512: 33.1861% ( 114) 00:21:08.052 6.512 - 6.540: 35.0899% ( 125) 00:21:08.052 6.540 - 6.568: 37.4657% ( 156) 00:21:08.052 6.568 - 6.596: 39.4456% ( 130) 00:21:08.052 6.596 - 6.624: 41.5626% ( 139) 00:21:08.052 6.624 - 6.652: 43.6186% ( 135) 00:21:08.052 6.652 - 6.679: 45.5681% ( 128) 00:21:08.052 6.679 - 6.707: 47.6089% ( 134) 00:21:08.052 6.707 - 6.735: 49.5583% ( 128) 00:21:08.052 6.735 - 6.763: 51.3250% ( 116) 00:21:08.052 6.763 - 6.791: 53.2135% ( 124) 00:21:08.052 6.791 - 6.819: 55.0868% ( 123) 00:21:08.052 6.819 - 6.847: 56.7469% ( 109) 00:21:08.052 6.847 - 6.875: 58.3308% ( 104) 
00:21:08.052 6.875 - 6.903: 60.1432% ( 119) 00:21:08.052 6.903 - 6.931: 61.5900% ( 95) 00:21:08.052 6.931 - 6.959: 63.0825% ( 98) 00:21:08.052 6.959 - 6.987: 64.6055% ( 100) 00:21:08.052 6.987 - 7.015: 65.9001% ( 85) 00:21:08.052 7.015 - 7.043: 67.2860% ( 91) 00:21:08.052 7.043 - 7.071: 68.3826% ( 72) 00:21:08.052 7.071 - 7.099: 69.6010% ( 80) 00:21:08.052 7.099 - 7.127: 70.6062% ( 66) 00:21:08.052 7.127 - 7.155: 71.8093% ( 79) 00:21:08.052 7.155 - 7.210: 73.8654% ( 135) 00:21:08.052 7.210 - 7.266: 75.3731% ( 99) 00:21:08.052 7.266 - 7.322: 76.8200% ( 95) 00:21:08.052 7.322 - 7.378: 78.2516% ( 94) 00:21:08.052 7.378 - 7.434: 79.2111% ( 63) 00:21:08.052 7.434 - 7.490: 80.1097% ( 59) 00:21:08.052 7.490 - 7.546: 80.9625% ( 56) 00:21:08.052 7.546 - 7.602: 81.6631% ( 46) 00:21:08.052 7.602 - 7.658: 82.2114% ( 36) 00:21:08.052 7.658 - 7.714: 82.6074% ( 26) 00:21:08.052 7.714 - 7.769: 83.2927% ( 45) 00:21:08.052 7.769 - 7.825: 83.7344% ( 29) 00:21:08.052 7.825 - 7.881: 84.2827% ( 36) 00:21:08.052 7.881 - 7.937: 84.6482% ( 24) 00:21:08.052 7.937 - 7.993: 84.9985% ( 23) 00:21:08.052 7.993 - 8.049: 85.2574% ( 17) 00:21:08.052 8.049 - 8.105: 85.6381% ( 25) 00:21:08.052 8.105 - 8.161: 86.0646% ( 28) 00:21:08.052 8.161 - 8.217: 86.7804% ( 47) 00:21:08.052 8.217 - 8.272: 87.3134% ( 35) 00:21:08.052 8.272 - 8.328: 88.0902% ( 51) 00:21:08.052 8.328 - 8.384: 88.7907% ( 46) 00:21:08.052 8.384 - 8.440: 89.4609% ( 44) 00:21:08.052 8.440 - 8.496: 89.8873% ( 28) 00:21:08.052 8.496 - 8.552: 90.1462% ( 17) 00:21:08.052 8.552 - 8.608: 90.4813% ( 22) 00:21:08.052 8.608 - 8.664: 90.7554% ( 18) 00:21:08.052 8.664 - 8.720: 90.9686% ( 14) 00:21:08.052 8.720 - 8.776: 91.3189% ( 23) 00:21:08.052 8.776 - 8.831: 91.5017% ( 12) 00:21:08.052 8.831 - 8.887: 91.8672% ( 24) 00:21:08.052 8.887 - 8.943: 92.0956% ( 15) 00:21:08.052 8.943 - 8.999: 92.2175% ( 8) 00:21:08.052 8.999 - 9.055: 92.4459% ( 15) 00:21:08.052 9.055 - 9.111: 92.5525% ( 7) 00:21:08.052 9.111 - 9.167: 92.6896% ( 9) 00:21:08.052 9.167 - 9.223: 92.8115% ( 8) 00:21:08.052 9.223 - 9.279: 92.9181% ( 7) 00:21:08.052 9.279 - 9.334: 93.0399% ( 8) 00:21:08.052 9.334 - 9.390: 93.1008% ( 4) 00:21:08.052 9.390 - 9.446: 93.2379% ( 9) 00:21:08.052 9.446 - 9.502: 93.2684% ( 2) 00:21:08.052 9.502 - 9.558: 93.3140% ( 3) 00:21:08.052 9.558 - 9.614: 93.3750% ( 4) 00:21:08.052 9.614 - 9.670: 93.4511% ( 5) 00:21:08.052 9.670 - 9.726: 93.5425% ( 6) 00:21:08.052 9.726 - 9.782: 93.6339% ( 6) 00:21:08.052 9.782 - 9.838: 93.7100% ( 5) 00:21:08.053 9.838 - 9.893: 93.7557% ( 3) 00:21:08.053 9.893 - 9.949: 93.8166% ( 4) 00:21:08.053 9.949 - 10.005: 93.8471% ( 2) 00:21:08.053 10.005 - 10.061: 93.9385% ( 6) 00:21:08.053 10.061 - 10.117: 93.9994% ( 4) 00:21:08.053 10.173 - 10.229: 94.0755% ( 5) 00:21:08.053 10.229 - 10.285: 94.1060% ( 2) 00:21:08.053 10.285 - 10.341: 94.1212% ( 1) 00:21:08.053 10.341 - 10.397: 94.2126% ( 6) 00:21:08.053 10.452 - 10.508: 94.2583% ( 3) 00:21:08.053 10.508 - 10.564: 94.2888% ( 2) 00:21:08.053 10.564 - 10.620: 94.3801% ( 6) 00:21:08.053 10.620 - 10.676: 94.4258% ( 3) 00:21:08.053 10.676 - 10.732: 94.5172% ( 6) 00:21:08.053 10.732 - 10.788: 94.5934% ( 5) 00:21:08.053 10.788 - 10.844: 94.6847% ( 6) 00:21:08.053 10.844 - 10.900: 94.7761% ( 6) 00:21:08.053 10.900 - 10.955: 94.8370% ( 4) 00:21:08.053 10.955 - 11.011: 94.8523% ( 1) 00:21:08.053 11.011 - 11.067: 94.9132% ( 4) 00:21:08.053 11.067 - 11.123: 94.9741% ( 4) 00:21:08.053 11.123 - 11.179: 94.9893% ( 1) 00:21:08.053 11.179 - 11.235: 95.0350% ( 3) 00:21:08.053 11.235 - 11.291: 95.1264% ( 6) 00:21:08.053 11.291 - 
11.347: 95.1569% ( 2) 00:21:08.053 11.347 - 11.403: 95.1721% ( 1) 00:21:08.053 11.403 - 11.459: 95.2178% ( 3) 00:21:08.053 11.459 - 11.514: 95.2635% ( 3) 00:21:08.053 11.514 - 11.570: 95.3244% ( 4) 00:21:08.053 11.570 - 11.626: 95.3549% ( 2) 00:21:08.053 11.626 - 11.682: 95.4310% ( 5) 00:21:08.053 11.682 - 11.738: 95.4615% ( 2) 00:21:08.053 11.738 - 11.794: 95.5224% ( 4) 00:21:08.053 11.794 - 11.850: 95.5528% ( 2) 00:21:08.053 11.962 - 12.017: 95.5833% ( 2) 00:21:08.053 12.129 - 12.185: 95.5985% ( 1) 00:21:08.053 12.185 - 12.241: 95.6442% ( 3) 00:21:08.053 12.241 - 12.297: 95.6747% ( 2) 00:21:08.053 12.297 - 12.353: 95.6899% ( 1) 00:21:08.053 12.353 - 12.409: 95.7204% ( 2) 00:21:08.053 12.576 - 12.632: 95.7356% ( 1) 00:21:08.053 12.632 - 12.688: 95.7661% ( 2) 00:21:08.053 12.688 - 12.744: 95.8270% ( 4) 00:21:08.053 12.744 - 12.800: 95.8422% ( 1) 00:21:08.053 12.856 - 12.912: 95.8879% ( 3) 00:21:08.053 13.024 - 13.079: 95.9031% ( 1) 00:21:08.053 13.135 - 13.191: 95.9336% ( 2) 00:21:08.053 13.191 - 13.247: 95.9488% ( 1) 00:21:08.053 13.247 - 13.303: 95.9641% ( 1) 00:21:08.053 13.303 - 13.359: 96.0097% ( 3) 00:21:08.053 13.415 - 13.471: 96.0250% ( 1) 00:21:08.053 13.527 - 13.583: 96.0402% ( 1) 00:21:08.053 13.583 - 13.638: 96.0707% ( 2) 00:21:08.053 13.638 - 13.694: 96.1011% ( 2) 00:21:08.053 13.694 - 13.750: 96.1164% ( 1) 00:21:08.053 13.750 - 13.806: 96.1316% ( 1) 00:21:08.053 13.862 - 13.918: 96.1620% ( 2) 00:21:08.053 13.918 - 13.974: 96.1925% ( 2) 00:21:08.053 14.197 - 14.253: 96.2077% ( 1) 00:21:08.053 14.253 - 14.309: 96.2382% ( 2) 00:21:08.053 14.309 - 14.421: 96.2534% ( 1) 00:21:08.053 14.421 - 14.533: 96.3143% ( 4) 00:21:08.053 14.533 - 14.645: 96.4210% ( 7) 00:21:08.053 14.645 - 14.756: 96.4971% ( 5) 00:21:08.053 14.756 - 14.868: 96.5276% ( 2) 00:21:08.053 15.092 - 15.203: 96.5428% ( 1) 00:21:08.053 15.203 - 15.315: 96.5885% ( 3) 00:21:08.053 15.315 - 15.427: 96.6037% ( 1) 00:21:08.053 15.762 - 15.874: 96.6799% ( 5) 00:21:08.053 15.874 - 15.986: 96.7256% ( 3) 00:21:08.053 15.986 - 16.098: 96.7712% ( 3) 00:21:08.053 16.098 - 16.210: 96.8169% ( 3) 00:21:08.053 16.210 - 16.321: 96.8474% ( 2) 00:21:08.053 16.321 - 16.433: 96.9083% ( 4) 00:21:08.053 16.433 - 16.545: 96.9388% ( 2) 00:21:08.053 16.545 - 16.657: 96.9997% ( 4) 00:21:08.053 16.657 - 16.769: 97.0149% ( 1) 00:21:08.053 16.769 - 16.880: 97.0606% ( 3) 00:21:08.053 16.880 - 16.992: 97.1063% ( 3) 00:21:08.053 16.992 - 17.104: 97.1672% ( 4) 00:21:08.053 17.104 - 17.216: 97.2281% ( 4) 00:21:08.053 17.216 - 17.328: 97.2434% ( 1) 00:21:08.053 17.328 - 17.439: 97.2738% ( 2) 00:21:08.053 17.439 - 17.551: 97.2891% ( 1) 00:21:08.053 17.551 - 17.663: 97.3043% ( 1) 00:21:08.053 17.663 - 17.775: 97.3804% ( 5) 00:21:08.053 17.775 - 17.886: 97.3957% ( 1) 00:21:08.053 17.886 - 17.998: 97.4718% ( 5) 00:21:08.053 17.998 - 18.110: 97.5327% ( 4) 00:21:08.053 18.110 - 18.222: 97.6546% ( 8) 00:21:08.053 18.222 - 18.334: 97.7003% ( 3) 00:21:08.053 18.334 - 18.445: 97.7764% ( 5) 00:21:08.053 18.557 - 18.669: 97.8069% ( 2) 00:21:08.053 18.669 - 18.781: 97.8830% ( 5) 00:21:08.053 18.781 - 18.893: 97.9592% ( 5) 00:21:08.053 18.893 - 19.004: 98.0201% ( 4) 00:21:08.053 19.004 - 19.116: 98.0506% ( 2) 00:21:08.053 19.116 - 19.228: 98.0658% ( 1) 00:21:08.053 19.228 - 19.340: 98.0963% ( 2) 00:21:08.053 19.340 - 19.452: 98.1419% ( 3) 00:21:08.053 19.675 - 19.787: 98.1572% ( 1) 00:21:08.053 19.899 - 20.010: 98.1724% ( 1) 00:21:08.053 20.010 - 20.122: 98.2181% ( 3) 00:21:08.053 20.234 - 20.346: 98.2333% ( 1) 00:21:08.053 20.346 - 20.458: 98.2486% ( 1) 00:21:08.053 
20.569 - 20.681: 98.3247% ( 5) 00:21:08.053 20.681 - 20.793: 98.3552% ( 2) 00:21:08.053 20.793 - 20.905: 98.3856% ( 2) 00:21:08.053 20.905 - 21.017: 98.4161% ( 2) 00:21:08.053 21.017 - 21.128: 98.4313% ( 1) 00:21:08.053 21.687 - 21.799: 98.4618% ( 2) 00:21:08.053 22.134 - 22.246: 98.4770% ( 1) 00:21:08.053 22.358 - 22.470: 98.5227% ( 3) 00:21:08.053 22.582 - 22.693: 98.5684% ( 3) 00:21:08.053 22.693 - 22.805: 98.5836% ( 1) 00:21:08.053 22.805 - 22.917: 98.5988% ( 1) 00:21:08.053 22.917 - 23.029: 98.6902% ( 6) 00:21:08.053 23.029 - 23.141: 98.7816% ( 6) 00:21:08.053 23.252 - 23.364: 98.8273% ( 3) 00:21:08.053 23.364 - 23.476: 98.8730% ( 3) 00:21:08.053 23.476 - 23.588: 98.9491% ( 5) 00:21:08.053 23.588 - 23.700: 98.9796% ( 2) 00:21:08.053 23.700 - 23.811: 99.0101% ( 2) 00:21:08.053 23.811 - 23.923: 99.0253% ( 1) 00:21:08.053 23.923 - 24.035: 99.0405% ( 1) 00:21:08.053 24.035 - 24.147: 99.0862% ( 3) 00:21:08.053 24.259 - 24.370: 99.1014% ( 1) 00:21:08.053 24.370 - 24.482: 99.1319% ( 2) 00:21:08.053 24.482 - 24.594: 99.1776% ( 3) 00:21:08.053 24.594 - 24.706: 99.1928% ( 1) 00:21:08.053 24.706 - 24.817: 99.2080% ( 1) 00:21:08.053 24.817 - 24.929: 99.2385% ( 2) 00:21:08.053 24.929 - 25.041: 99.2537% ( 1) 00:21:08.053 25.041 - 25.153: 99.2994% ( 3) 00:21:08.053 25.153 - 25.265: 99.3147% ( 1) 00:21:08.053 25.265 - 25.376: 99.3451% ( 2) 00:21:08.053 25.376 - 25.488: 99.3603% ( 1) 00:21:08.053 25.488 - 25.600: 99.3756% ( 1) 00:21:08.053 25.600 - 25.712: 99.4670% ( 6) 00:21:08.053 25.712 - 25.824: 99.4974% ( 2) 00:21:08.053 25.824 - 25.935: 99.5279% ( 2) 00:21:08.053 25.935 - 26.047: 99.5736% ( 3) 00:21:08.053 26.047 - 26.159: 99.6040% ( 2) 00:21:08.053 26.159 - 26.271: 99.6193% ( 1) 00:21:08.053 26.271 - 26.383: 99.6345% ( 1) 00:21:08.053 26.606 - 26.718: 99.6497% ( 1) 00:21:08.053 26.718 - 26.830: 99.6802% ( 2) 00:21:08.053 26.830 - 26.941: 99.6954% ( 1) 00:21:08.053 27.165 - 27.277: 99.7106% ( 1) 00:21:08.053 27.389 - 27.500: 99.7259% ( 1) 00:21:08.053 27.724 - 27.836: 99.7411% ( 1) 00:21:08.053 27.836 - 27.948: 99.7868% ( 3) 00:21:08.053 28.507 - 28.618: 99.8020% ( 1) 00:21:08.053 31.972 - 32.196: 99.8172% ( 1) 00:21:08.053 32.866 - 33.090: 99.8477% ( 2) 00:21:08.053 33.090 - 33.314: 99.8629% ( 1) 00:21:08.053 35.326 - 35.549: 99.8782% ( 1) 00:21:08.053 35.997 - 36.220: 99.8934% ( 1) 00:21:08.053 43.822 - 44.045: 99.9086% ( 1) 00:21:08.053 44.269 - 44.493: 99.9239% ( 1) 00:21:08.053 44.493 - 44.716: 99.9391% ( 1) 00:21:08.053 52.318 - 52.541: 99.9543% ( 1) 00:21:08.053 61.261 - 61.708: 99.9695% ( 1) 00:21:08.053 65.286 - 65.733: 99.9848% ( 1) 00:21:08.053 153.824 - 154.718: 100.0000% ( 1) 00:21:08.053 00:21:08.053 00:21:08.053 real 0m1.274s 00:21:08.053 user 0m1.096s 00:21:08.053 sys 0m0.135s 00:21:08.053 07:31:11 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:08.053 07:31:11 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:21:08.053 ************************************ 00:21:08.053 END TEST nvme_overhead 00:21:08.053 ************************************ 00:21:08.053 07:31:11 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:21:08.053 07:31:11 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:21:08.053 07:31:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:08.053 07:31:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:08.053 ************************************ 00:21:08.053 START TEST nvme_arbitration 00:21:08.053 
************************************ 00:21:08.053 07:31:11 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:21:11.355 Initializing NVMe Controllers 00:21:11.355 Attached to 0000:00:10.0 00:21:11.355 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:21:11.355 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:21:11.355 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:21:11.355 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:21:11.355 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:21:11.355 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:21:11.355 Initialization complete. Launching workers. 00:21:11.355 Starting thread on core 1 with urgent priority queue 00:21:11.355 Starting thread on core 2 with urgent priority queue 00:21:11.355 Starting thread on core 3 with urgent priority queue 00:21:11.355 Starting thread on core 0 with urgent priority queue 00:21:11.355 QEMU NVMe Ctrl (12340 ) core 0: 1130.67 IO/s 88.44 secs/100000 ios 00:21:11.356 QEMU NVMe Ctrl (12340 ) core 1: 1109.33 IO/s 90.14 secs/100000 ios 00:21:11.356 QEMU NVMe Ctrl (12340 ) core 2: 554.67 IO/s 180.29 secs/100000 ios 00:21:11.356 QEMU NVMe Ctrl (12340 ) core 3: 682.67 IO/s 146.48 secs/100000 ios 00:21:11.356 ======================================================== 00:21:11.356 00:21:11.356 00:21:11.356 real 0m3.335s 00:21:11.356 user 0m9.271s 00:21:11.356 sys 0m0.144s 00:21:11.356 07:31:15 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:11.356 07:31:15 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:21:11.356 ************************************ 00:21:11.356 END TEST nvme_arbitration 00:21:11.356 ************************************ 00:21:11.356 07:31:15 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:21:11.356 07:31:15 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:11.356 07:31:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.356 07:31:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:11.356 ************************************ 00:21:11.356 START TEST nvme_single_aen 00:21:11.356 ************************************ 00:21:11.356 07:31:15 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:21:11.624 Asynchronous Event Request test 00:21:11.624 Attached to 0000:00:10.0 00:21:11.624 Reset controller to setup AER completions for this process 00:21:11.624 Registering asynchronous event callbacks... 00:21:11.624 Getting orig temperature thresholds of all controllers 00:21:11.624 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:11.624 Setting all controllers temperature threshold low to trigger AER 00:21:11.624 Waiting for all controllers temperature threshold to be set lower 00:21:11.624 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:11.624 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:21:11.624 Waiting for all controllers to trigger AER and reset threshold 00:21:11.624 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:11.624 Cleaning up... 
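The single-AER step above is one invocation of the aer test tool; everything in its output follows from that one command. The sketch below repeats it with the flags from the trace. What -T and -i mean is not spelled out anywhere in this log, so the comments only restate what the output shows the tool doing, and the path is the one from this workspace.

# Sketch: repeat the nvme_single_aen step as traced above.
# Observed behaviour: read the original temperature threshold (343 K), lower
# it below the current temperature (323 K) so the controller raises an AER,
# handle the aer_cb for log page 2, then restore the threshold and clean up.
AER=/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer
"$AER" -T -i 0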
00:21:11.624 00:21:11.624 real 0m0.268s 00:21:11.624 user 0m0.101s 00:21:11.624 sys 0m0.123s 00:21:11.624 07:31:15 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:11.624 07:31:15 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:21:11.624 ************************************ 00:21:11.624 END TEST nvme_single_aen 00:21:11.624 ************************************ 00:21:11.624 07:31:15 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:21:11.624 07:31:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:11.624 07:31:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.624 07:31:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:11.624 ************************************ 00:21:11.624 START TEST nvme_doorbell_aers 00:21:11.624 ************************************ 00:21:11.624 07:31:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:21:11.624 07:31:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:21:11.624 07:31:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:21:11.624 07:31:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:21:11.624 07:31:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:21:11.624 07:31:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:21:11.624 07:31:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:21:11.624 07:31:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:11.624 07:31:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:11.624 07:31:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:21:11.624 07:31:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:21:11.624 07:31:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 00:21:11.624 07:31:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:21:11.624 07:31:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:11.884 [2024-11-20 07:31:15.796592] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 79994) is not found. Dropping the request. 00:21:21.873 Executing: test_write_invalid_db 00:21:21.873 Waiting for AER completion... 00:21:21.873 Failure: test_write_invalid_db 00:21:21.873 00:21:21.873 Executing: test_invalid_db_write_overflow_sq 00:21:21.873 Waiting for AER completion... 00:21:21.873 Failure: test_invalid_db_write_overflow_sq 00:21:21.873 00:21:21.873 Executing: test_invalid_db_write_overflow_cq 00:21:21.873 Waiting for AER completion... 
00:21:21.873 Failure: test_invalid_db_write_overflow_cq 00:21:21.873 00:21:21.873 00:21:21.873 real 0m10.130s 00:21:21.873 user 0m9.042s 00:21:21.873 sys 0m1.049s 00:21:21.873 07:31:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.873 07:31:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:21:21.873 ************************************ 00:21:21.873 END TEST nvme_doorbell_aers 00:21:21.873 ************************************ 00:21:21.873 07:31:25 nvme -- nvme/nvme.sh@97 -- # uname 00:21:21.873 07:31:25 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:21:21.874 07:31:25 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:21:21.874 07:31:25 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:21:21.874 07:31:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.874 07:31:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:21.874 ************************************ 00:21:21.874 START TEST nvme_multi_aen 00:21:21.874 ************************************ 00:21:21.874 07:31:25 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:21:22.134 [2024-11-20 07:31:25.870847] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 79994) is not found. Dropping the request. 00:21:22.134 [2024-11-20 07:31:25.870941] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 79994) is not found. Dropping the request. 00:21:22.134 [2024-11-20 07:31:25.870963] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 79994) is not found. Dropping the request. 00:21:22.134 Child process pid: 80166 00:21:22.393 [Child] Asynchronous Event Request test 00:21:22.393 [Child] Attached to 0000:00:10.0 00:21:22.393 [Child] Registering asynchronous event callbacks... 00:21:22.393 [Child] Getting orig temperature thresholds of all controllers 00:21:22.393 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:22.393 [Child] Waiting for all controllers to trigger AER and reset threshold 00:21:22.393 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:22.393 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:22.393 [Child] Cleaning up... 00:21:22.393 Asynchronous Event Request test 00:21:22.393 Attached to 0000:00:10.0 00:21:22.393 Reset controller to setup AER completions for this process 00:21:22.393 Registering asynchronous event callbacks... 00:21:22.393 Getting orig temperature thresholds of all controllers 00:21:22.393 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:22.393 Setting all controllers temperature threshold low to trigger AER 00:21:22.393 Waiting for all controllers temperature threshold to be set lower 00:21:22.393 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:22.393 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:21:22.393 Waiting for all controllers to trigger AER and reset threshold 00:21:22.393 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:22.393 Cleaning up... 
00:21:22.393 00:21:22.393 real 0m0.565s 00:21:22.394 user 0m0.178s 00:21:22.394 sys 0m0.281s 00:21:22.394 07:31:26 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.394 07:31:26 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:21:22.394 ************************************ 00:21:22.394 END TEST nvme_multi_aen 00:21:22.394 ************************************ 00:21:22.394 07:31:26 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:21:22.394 07:31:26 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:22.394 07:31:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:22.394 07:31:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:22.394 ************************************ 00:21:22.394 START TEST nvme_startup 00:21:22.394 ************************************ 00:21:22.394 07:31:26 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:21:22.654 Initializing NVMe Controllers 00:21:22.654 Attached to 0000:00:10.0 00:21:22.654 Initialization complete. 00:21:22.654 Time used:183191.516 (us). 00:21:22.654 00:21:22.654 real 0m0.272s 00:21:22.654 user 0m0.092s 00:21:22.654 sys 0m0.135s 00:21:22.654 07:31:26 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.654 07:31:26 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:21:22.654 ************************************ 00:21:22.654 END TEST nvme_startup 00:21:22.654 ************************************ 00:21:22.914 07:31:26 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:21:22.914 07:31:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:22.914 07:31:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:22.914 07:31:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:22.914 ************************************ 00:21:22.914 START TEST nvme_multi_secondary 00:21:22.914 ************************************ 00:21:22.914 07:31:26 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:21:22.914 07:31:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=80222 00:21:22.914 07:31:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:21:22.914 07:31:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=80223 00:21:22.914 07:31:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:21:22.914 07:31:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:21:26.214 Initializing NVMe Controllers 00:21:26.214 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:26.214 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:21:26.214 Initialization complete. Launching workers. 
00:21:26.214 ======================================================== 00:21:26.214 Latency(us) 00:21:26.214 Device Information : IOPS MiB/s Average min max 00:21:26.214 PCIE (0000:00:10.0) NSID 1 from core 1: 42512.00 166.06 376.06 129.94 1217.07 00:21:26.214 ======================================================== 00:21:26.214 Total : 42512.00 166.06 376.06 129.94 1217.07 00:21:26.214 00:21:26.214 Initializing NVMe Controllers 00:21:26.214 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:26.214 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:21:26.214 Initialization complete. Launching workers. 00:21:26.214 ======================================================== 00:21:26.214 Latency(us) 00:21:26.214 Device Information : IOPS MiB/s Average min max 00:21:26.214 PCIE (0000:00:10.0) NSID 1 from core 2: 18066.47 70.57 884.91 138.37 8672.23 00:21:26.214 ======================================================== 00:21:26.214 Total : 18066.47 70.57 884.91 138.37 8672.23 00:21:26.214 00:21:26.214 07:31:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 80222 00:21:28.748 Initializing NVMe Controllers 00:21:28.748 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:28.748 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:21:28.748 Initialization complete. Launching workers. 00:21:28.748 ======================================================== 00:21:28.748 Latency(us) 00:21:28.748 Device Information : IOPS MiB/s Average min max 00:21:28.748 PCIE (0000:00:10.0) NSID 1 from core 0: 48268.80 188.55 331.17 127.92 1317.48 00:21:28.748 ======================================================== 00:21:28.748 Total : 48268.80 188.55 331.17 127.92 1317.48 00:21:28.748 00:21:28.748 07:31:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 80223 00:21:28.748 07:31:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=80292 00:21:28.748 07:31:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:21:28.748 07:31:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=80293 00:21:28.748 07:31:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:21:28.748 07:31:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:21:32.058 Initializing NVMe Controllers 00:21:32.058 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:32.058 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:21:32.058 Initialization complete. Launching workers. 00:21:32.058 ======================================================== 00:21:32.058 Latency(us) 00:21:32.058 Device Information : IOPS MiB/s Average min max 00:21:32.058 PCIE (0000:00:10.0) NSID 1 from core 0: 41993.29 164.04 380.70 134.67 1283.08 00:21:32.058 ======================================================== 00:21:32.058 Total : 41993.29 164.04 380.70 134.67 1283.08 00:21:32.058 00:21:32.058 Initializing NVMe Controllers 00:21:32.058 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:32.058 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:21:32.058 Initialization complete. Launching workers. 
00:21:32.058 ======================================================== 00:21:32.058 Latency(us) 00:21:32.058 Device Information : IOPS MiB/s Average min max 00:21:32.058 PCIE (0000:00:10.0) NSID 1 from core 1: 42298.67 165.23 377.94 133.57 1373.87 00:21:32.058 ======================================================== 00:21:32.058 Total : 42298.67 165.23 377.94 133.57 1373.87 00:21:32.058 00:21:33.965 Initializing NVMe Controllers 00:21:33.965 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:33.965 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:21:33.965 Initialization complete. Launching workers. 00:21:33.965 ======================================================== 00:21:33.965 Latency(us) 00:21:33.965 Device Information : IOPS MiB/s Average min max 00:21:33.965 PCIE (0000:00:10.0) NSID 1 from core 2: 17941.50 70.08 891.24 142.77 8907.00 00:21:33.965 ======================================================== 00:21:33.965 Total : 17941.50 70.08 891.24 142.77 8907.00 00:21:33.965 00:21:33.965 07:31:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 80292 00:21:33.965 07:31:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 80293 00:21:33.965 00:21:33.965 real 0m11.147s 00:21:33.965 user 0m18.543s 00:21:33.965 sys 0m0.964s 00:21:33.965 07:31:37 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.965 07:31:37 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:21:33.965 ************************************ 00:21:33.965 END TEST nvme_multi_secondary 00:21:33.965 ************************************ 00:21:33.965 07:31:37 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:21:33.965 07:31:37 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:21:33.965 07:31:37 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/79611 ]] 00:21:33.965 07:31:37 nvme -- common/autotest_common.sh@1094 -- # kill 79611 00:21:33.965 07:31:37 nvme -- common/autotest_common.sh@1095 -- # wait 79611 00:21:33.965 [2024-11-20 07:31:37.818001] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80165) is not found. Dropping the request. 00:21:33.965 [2024-11-20 07:31:37.818826] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80165) is not found. Dropping the request. 00:21:33.965 [2024-11-20 07:31:37.818877] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80165) is not found. Dropping the request. 00:21:33.965 [2024-11-20 07:31:37.818888] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80165) is not found. Dropping the request. 
00:21:34.225 07:31:37 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:21:34.225 07:31:37 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:21:34.225 07:31:37 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:21:34.225 07:31:37 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:34.225 07:31:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.225 07:31:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:34.225 ************************************ 00:21:34.225 START TEST bdev_nvme_reset_stuck_adm_cmd 00:21:34.225 ************************************ 00:21:34.225 07:31:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:21:34.225 * Looking for test storage... 00:21:34.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:21:34.225 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:34.225 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:21:34.225 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:34.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.485 --rc genhtml_branch_coverage=1 00:21:34.485 --rc genhtml_function_coverage=1 00:21:34.485 --rc genhtml_legend=1 00:21:34.485 --rc geninfo_all_blocks=1 00:21:34.485 --rc geninfo_unexecuted_blocks=1 00:21:34.485 00:21:34.485 ' 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:34.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.485 --rc genhtml_branch_coverage=1 00:21:34.485 --rc genhtml_function_coverage=1 00:21:34.485 --rc genhtml_legend=1 00:21:34.485 --rc geninfo_all_blocks=1 00:21:34.485 --rc geninfo_unexecuted_blocks=1 00:21:34.485 00:21:34.485 ' 00:21:34.485 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:34.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.486 --rc genhtml_branch_coverage=1 00:21:34.486 --rc genhtml_function_coverage=1 00:21:34.486 --rc genhtml_legend=1 00:21:34.486 --rc geninfo_all_blocks=1 00:21:34.486 --rc geninfo_unexecuted_blocks=1 00:21:34.486 00:21:34.486 ' 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:34.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.486 --rc genhtml_branch_coverage=1 00:21:34.486 --rc genhtml_function_coverage=1 00:21:34.486 --rc genhtml_legend=1 00:21:34.486 --rc geninfo_all_blocks=1 00:21:34.486 --rc geninfo_unexecuted_blocks=1 00:21:34.486 00:21:34.486 ' 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:21:34.486 
07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=80442 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 80442 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 80442 ']' 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.486 07:31:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:21:34.486 [2024-11-20 07:31:38.377314] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:21:34.486 [2024-11-20 07:31:38.377435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80442 ] 00:21:34.746 [2024-11-20 07:31:38.564359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.005 [2024-11-20 07:31:38.691653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.005 [2024-11-20 07:31:38.691863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.005 [2024-11-20 07:31:38.692049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.005 [2024-11-20 07:31:38.692108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.944 07:31:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.944 07:31:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:21:35.944 07:31:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:21:35.944 07:31:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.944 07:31:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:21:35.944 nvme0n1 00:21:35.944 07:31:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.944 07:31:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:21:35.944 07:31:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_dQwv3.txt 00:21:35.944 07:31:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:21:35.944 07:31:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.944 07:31:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:21:35.944 true 00:21:35.944 07:31:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.944 07:31:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:21:35.944 07:31:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732087899 00:21:35.944 07:31:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=80471 00:21:35.944 07:31:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:35.944 07:31:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:21:35.944 07:31:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:21:37.854 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:37.854 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.854 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:21:37.854 [2024-11-20 07:31:41.725888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:21:37.854 [2024-11-20 07:31:41.728089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:37.854 [2024-11-20 07:31:41.728140] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:21:37.854 [2024-11-20 07:31:41.728157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.854 [2024-11-20 07:31:41.730605] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:21:37.854 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 80471 00:21:37.854 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.854 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 80471 00:21:37.854 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 80471 00:21:37.854 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:21:37.854 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:21:37.854 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:37.854 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.854 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_dQwv3.txt 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_dQwv3.txt 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 80442 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 80442 ']' 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 80442 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80442 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:38.115 killing process with pid 80442 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80442' 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 80442 00:21:38.115 07:31:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 80442 00:21:40.654 07:31:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:21:40.654 07:31:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:21:40.654 00:21:40.654 real 0m6.142s 00:21:40.654 user 0m21.497s 00:21:40.654 sys 0m0.815s 00:21:40.654 07:31:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:21:40.654 07:31:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:21:40.654 ************************************ 00:21:40.654 END TEST bdev_nvme_reset_stuck_adm_cmd 00:21:40.654 ************************************ 00:21:40.654 07:31:44 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:21:40.654 07:31:44 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:21:40.654 07:31:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:40.654 07:31:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.654 07:31:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:40.654 ************************************ 00:21:40.654 START TEST nvme_fio 00:21:40.654 ************************************ 00:21:40.654 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:21:40.654 07:31:44 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:40.654 07:31:44 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:21:40.654 07:31:44 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:21:40.654 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:21:40.654 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:21:40.654 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:40.654 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:40.654 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:21:40.654 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:21:40.654 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 00:21:40.654 07:31:44 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:21:40.654 07:31:44 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:21:40.654 07:31:44 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:21:40.654 07:31:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:40.654 07:31:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:21:40.654 07:31:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:40.654 07:31:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:21:40.915 07:31:44 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:21:40.915 07:31:44 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:21:40.915 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:21:40.915 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:40.915 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:40.915 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:40.915 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:40.915 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:21:40.915 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:40.915 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:40.915 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:40.915 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:40.915 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:21:41.175 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:21:41.175 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:21:41.175 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:21:41.175 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:41.175 07:31:44 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:21:41.175 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:41.175 fio-3.35 00:21:41.175 Starting 1 thread 00:21:46.486 00:21:46.486 test: (groupid=0, jobs=1): err= 0: pid=80612: Wed Nov 20 07:31:49 2024 00:21:46.486 read: IOPS=21.7k, BW=84.8MiB/s (88.9MB/s)(170MiB/2001msec) 00:21:46.486 slat (nsec): min=4525, max=71568, avg=5329.34, stdev=1294.75 00:21:46.486 clat (usec): min=216, max=11438, avg=2939.46, stdev=430.87 00:21:46.486 lat (usec): min=221, max=11510, avg=2944.79, stdev=431.59 00:21:46.486 clat percentiles (usec): 00:21:46.486 | 1.00th=[ 2638], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2802], 00:21:46.486 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2868], 60.00th=[ 2900], 00:21:46.486 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3392], 00:21:46.486 | 99.00th=[ 4424], 99.50th=[ 5538], 99.90th=[ 8455], 99.95th=[ 9241], 00:21:46.486 | 99.99th=[11207] 00:21:46.486 bw ( KiB/s): min=85176, max=88616, per=99.79%, avg=86618.67, stdev=1785.82, samples=3 00:21:46.486 iops : min=21294, max=22154, avg=21654.67, stdev=446.45, samples=3 00:21:46.486 write: IOPS=21.5k, BW=84.1MiB/s (88.2MB/s)(168MiB/2001msec); 0 zone resets 00:21:46.486 slat (nsec): min=4563, max=60716, avg=5532.41, stdev=1289.29 00:21:46.486 clat (usec): min=336, max=11305, avg=2950.90, stdev=439.79 00:21:46.486 lat (usec): min=342, max=11317, avg=2956.43, stdev=440.47 00:21:46.486 clat percentiles (usec): 00:21:46.486 | 1.00th=[ 2671], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2802], 00:21:46.486 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2900], 00:21:46.486 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3064], 95.00th=[ 3392], 00:21:46.486 | 99.00th=[ 4686], 99.50th=[ 5800], 99.90th=[ 8455], 99.95th=[ 9372], 00:21:46.486 | 99.99th=[10814] 00:21:46.486 bw ( KiB/s): min=85720, max=88640, per=100.00%, avg=86818.67, stdev=1588.49, samples=3 00:21:46.486 iops : min=21430, max=22160, avg=21704.67, stdev=397.12, samples=3 00:21:46.486 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:21:46.486 lat (msec) : 2=0.05%, 4=98.72%, 10=1.16%, 20=0.03% 00:21:46.486 cpu : usr=99.85%, sys=0.10%, ctx=17, 
majf=0, minf=609 00:21:46.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:46.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:46.486 issued rwts: total=43423,43102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:46.486 00:21:46.486 Run status group 0 (all jobs): 00:21:46.486 READ: bw=84.8MiB/s (88.9MB/s), 84.8MiB/s-84.8MiB/s (88.9MB/s-88.9MB/s), io=170MiB (178MB), run=2001-2001msec 00:21:46.486 WRITE: bw=84.1MiB/s (88.2MB/s), 84.1MiB/s-84.1MiB/s (88.2MB/s-88.2MB/s), io=168MiB (177MB), run=2001-2001msec 00:21:46.486 ----------------------------------------------------- 00:21:46.486 Suppressions used: 00:21:46.486 count bytes template 00:21:46.486 1 32 /usr/src/fio/parse.c 00:21:46.486 ----------------------------------------------------- 00:21:46.486 00:21:46.486 07:31:50 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:21:46.487 07:31:50 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:21:46.487 00:21:46.487 real 0m5.902s 00:21:46.487 user 0m4.094s 00:21:46.487 sys 0m3.006s 00:21:46.487 07:31:50 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.487 07:31:50 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:21:46.487 ************************************ 00:21:46.487 END TEST nvme_fio 00:21:46.487 ************************************ 00:21:46.487 00:21:46.487 real 0m48.713s 00:21:46.487 user 2m7.821s 00:21:46.487 sys 0m11.102s 00:21:46.487 07:31:50 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.487 07:31:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:46.487 ************************************ 00:21:46.487 END TEST nvme 00:21:46.487 ************************************ 00:21:46.487 07:31:50 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:21:46.487 07:31:50 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:21:46.487 07:31:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:46.487 07:31:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.487 07:31:50 -- common/autotest_common.sh@10 -- # set +x 00:21:46.487 ************************************ 00:21:46.487 START TEST nvme_scc 00:21:46.487 ************************************ 00:21:46.487 07:31:50 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:21:46.487 * Looking for test storage... 
00:21:46.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:21:46.487 07:31:50 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:46.487 07:31:50 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:21:46.487 07:31:50 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:46.487 07:31:50 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:46.487 07:31:50 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:46.487 07:31:50 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:46.487 07:31:50 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:46.487 07:31:50 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.487 07:31:50 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:21:46.487 07:31:50 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:21:46.487 07:31:50 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:21:46.487 07:31:50 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:21:46.487 07:31:50 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:21:46.487 07:31:50 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:21:46.487 07:31:50 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:46.487 07:31:50 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:21:46.487 07:31:50 nvme_scc -- scripts/common.sh@345 -- # : 1 00:21:46.487 07:31:50 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:46.487 07:31:50 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:46.487 07:31:50 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:21:46.747 07:31:50 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:21:46.747 07:31:50 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.747 07:31:50 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:21:46.747 07:31:50 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:21:46.747 07:31:50 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:21:46.747 07:31:50 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:21:46.747 07:31:50 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.747 07:31:50 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:21:46.747 07:31:50 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:21:46.747 07:31:50 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:46.747 07:31:50 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:46.747 07:31:50 nvme_scc -- scripts/common.sh@368 -- # return 0 00:21:46.747 07:31:50 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.747 07:31:50 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:46.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.747 --rc genhtml_branch_coverage=1 00:21:46.747 --rc genhtml_function_coverage=1 00:21:46.747 --rc genhtml_legend=1 00:21:46.747 --rc geninfo_all_blocks=1 00:21:46.747 --rc geninfo_unexecuted_blocks=1 00:21:46.747 00:21:46.747 ' 00:21:46.747 07:31:50 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:46.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.747 --rc genhtml_branch_coverage=1 00:21:46.747 --rc genhtml_function_coverage=1 00:21:46.747 --rc genhtml_legend=1 00:21:46.747 --rc geninfo_all_blocks=1 00:21:46.747 --rc geninfo_unexecuted_blocks=1 00:21:46.747 00:21:46.747 ' 00:21:46.747 07:31:50 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:21:46.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.747 --rc genhtml_branch_coverage=1 00:21:46.747 --rc genhtml_function_coverage=1 00:21:46.747 --rc genhtml_legend=1 00:21:46.747 --rc geninfo_all_blocks=1 00:21:46.747 --rc geninfo_unexecuted_blocks=1 00:21:46.747 00:21:46.747 ' 00:21:46.747 07:31:50 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:46.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.747 --rc genhtml_branch_coverage=1 00:21:46.747 --rc genhtml_function_coverage=1 00:21:46.747 --rc genhtml_legend=1 00:21:46.747 --rc geninfo_all_blocks=1 00:21:46.747 --rc geninfo_unexecuted_blocks=1 00:21:46.747 00:21:46.747 ' 00:21:46.747 07:31:50 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:21:46.747 07:31:50 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:21:46.747 07:31:50 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:21:46.747 07:31:50 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:46.747 07:31:50 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:46.747 07:31:50 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:21:46.747 07:31:50 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.747 07:31:50 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.747 07:31:50 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.747 07:31:50 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:21:46.747 07:31:50 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:21:46.747 07:31:50 nvme_scc -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:21:46.747 07:31:50 nvme_scc -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:21:46.747 07:31:50 nvme_scc -- paths/export.sh@6 -- # export PATH 00:21:46.747 07:31:50 nvme_scc -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:21:46.747 07:31:50 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:21:46.747 07:31:50 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:21:46.747 07:31:50 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:21:46.747 07:31:50 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:21:46.747 07:31:50 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:21:46.747 07:31:50 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:21:46.747 07:31:50 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:21:46.747 07:31:50 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:21:46.747 07:31:50 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:21:46.747 07:31:50 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:46.747 07:31:50 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:21:46.747 07:31:50 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:21:46.747 07:31:50 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:21:46.747 07:31:50 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:47.007 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:21:47.007 Waiting for block devices as requested 00:21:47.269 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:47.269 07:31:51 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:21:47.269 07:31:51 nvme_scc -- scripts/common.sh@18 -- # local i 00:21:47.269 07:31:51 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:21:47.269 07:31:51 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:47.269 07:31:51 nvme_scc -- scripts/common.sh@27 -- # return 0 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@17 
-- # local ref=nvme0 reg val 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@18 -- # shift 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:21:47.269 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0[cmic]=0 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:21:47.270 07:31:51 nvme_scc 
-- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.270 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:21:47.271 07:31:51 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # 
[[ -n 0x15d ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:21:47.271 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:21:47.272 07:31:51 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@18 -- # shift 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:21:47.272 
07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.272 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[noiob]="0"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:21:47.273 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:21:47.274 07:31:51 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@206 -- # 
_ctrls=($(get_ctrls_with_feature "$feature")) 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:21:47.274 07:31:51 nvme_scc -- nvme/functions.sh@192 -- # (( 1 == 0 )) 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@208 -- # echo nvme0 00:21:47.275 07:31:51 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:21:47.275 07:31:51 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:21:47.275 07:31:51 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:21:47.275 07:31:51 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:47.845 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:21:47.845 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:48.785 07:31:52 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:21:48.785 07:31:52 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:48.785 07:31:52 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:48.785 07:31:52 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:21:48.785 ************************************ 00:21:48.785 START TEST nvme_simple_copy 00:21:48.785 ************************************ 00:21:48.785 07:31:52 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:21:49.044 Initializing NVMe Controllers 00:21:49.044 Attaching to 0000:00:10.0 00:21:49.044 Controller supports SCC. Attached to 0000:00:10.0 00:21:49.044 Namespace ID: 1 size: 5GB 00:21:49.044 Initialization complete. 
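The trace above is the capability gate from nvme/functions.sh: after scan_nvme_ctrls has parsed the nvme id-ctrl output into the nvme0 array, get_ctrls_with_feature tests ONCS (0x15d on this QEMU controller) for bit 8, the NVMe Copy command bit, and only then does nvme_scc.sh settle on nvme0 at 0000:00:10.0. A minimal stand-alone sketch of that check, with the 0x15d value hard-coded purely for illustration:

# Sketch of the ONCS bit test performed by ctrl_has_scc in nvme/functions.sh.
# ONCS (Optional NVM Command Support) bit 8 advertises the Copy command used by SCC.
oncs=0x15d                       # value reported by the QEMU controller in this run
if (( oncs & (1 << 8) )); then
    echo "nvme0 supports simple copy (SCC)"
else
    echo "nvme0 does not support simple copy"
fi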
00:21:49.044 00:21:49.044 Controller QEMU NVMe Ctrl (12340 ) 00:21:49.044 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:21:49.044 Namespace Block Size:4096 00:21:49.044 Writing LBAs 0 to 63 with Random Data 00:21:49.044 Copied LBAs from 0 - 63 to the Destination LBA 256 00:21:49.044 LBAs matching Written Data: 64 00:21:49.044 00:21:49.044 real 0m0.293s 00:21:49.044 user 0m0.105s 00:21:49.044 sys 0m0.089s 00:21:49.044 07:31:52 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:49.044 07:31:52 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:21:49.044 ************************************ 00:21:49.044 END TEST nvme_simple_copy 00:21:49.044 ************************************ 00:21:49.044 00:21:49.044 real 0m2.609s 00:21:49.044 user 0m0.664s 00:21:49.044 sys 0m1.906s 00:21:49.044 07:31:52 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:49.044 07:31:52 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:21:49.044 ************************************ 00:21:49.044 END TEST nvme_scc 00:21:49.044 ************************************ 00:21:49.044 07:31:52 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:21:49.045 07:31:52 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:21:49.045 07:31:52 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:21:49.045 07:31:52 -- spdk/autotest.sh@228 -- # [[ 0 -eq 1 ]] 00:21:49.045 07:31:52 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:21:49.045 07:31:52 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:21:49.045 07:31:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:49.045 07:31:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:49.045 07:31:52 -- common/autotest_common.sh@10 -- # set +x 00:21:49.045 ************************************ 00:21:49.045 START TEST nvme_rpc 00:21:49.045 ************************************ 00:21:49.045 07:31:52 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:21:49.305 * Looking for test storage... 
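Everything from the START TEST banner down to the real/user/sys summary above is emitted by the run_test wrapper in test/common/autotest_common.sh, which autotest.sh uses again here to dispatch nvme_rpc.sh. A rough sketch of that wrapper's shape, assuming only the banner-and-timing behaviour visible in this log (the real helper also handles xtrace and exit-code bookkeeping, and run_test_sketch below is a hypothetical stand-in, not the actual SPDK function):

# Sketch only: approximates the START TEST / END TEST banners and the time
# summary that bracket each sub-test in this log.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                    # run the test script with its arguments
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
# Usage mirroring the dispatch above:
# run_test_sketch nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh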
00:21:49.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:21:49.305 07:31:53 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:49.305 07:31:53 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:21:49.305 07:31:53 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:49.305 07:31:53 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:49.305 07:31:53 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:21:49.305 07:31:53 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:49.305 07:31:53 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:49.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.305 --rc genhtml_branch_coverage=1 00:21:49.305 --rc genhtml_function_coverage=1 00:21:49.305 --rc genhtml_legend=1 00:21:49.305 --rc geninfo_all_blocks=1 00:21:49.305 --rc geninfo_unexecuted_blocks=1 00:21:49.305 00:21:49.305 ' 00:21:49.305 07:31:53 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:49.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.305 --rc genhtml_branch_coverage=1 00:21:49.305 --rc genhtml_function_coverage=1 00:21:49.305 --rc genhtml_legend=1 00:21:49.305 --rc geninfo_all_blocks=1 00:21:49.305 --rc geninfo_unexecuted_blocks=1 00:21:49.305 00:21:49.305 ' 00:21:49.305 07:31:53 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:21:49.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.305 --rc genhtml_branch_coverage=1 00:21:49.305 --rc genhtml_function_coverage=1 00:21:49.305 --rc genhtml_legend=1 00:21:49.305 --rc geninfo_all_blocks=1 00:21:49.305 --rc geninfo_unexecuted_blocks=1 00:21:49.305 00:21:49.305 ' 00:21:49.305 07:31:53 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:49.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.306 --rc genhtml_branch_coverage=1 00:21:49.306 --rc genhtml_function_coverage=1 00:21:49.306 --rc genhtml_legend=1 00:21:49.306 --rc geninfo_all_blocks=1 00:21:49.306 --rc geninfo_unexecuted_blocks=1 00:21:49.306 00:21:49.306 ' 00:21:49.306 07:31:53 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:49.306 07:31:53 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:21:49.306 07:31:53 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:21:49.306 07:31:53 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:21:49.306 07:31:53 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:21:49.306 07:31:53 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:21:49.306 07:31:53 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:21:49.306 07:31:53 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:21:49.306 07:31:53 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:49.306 07:31:53 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:49.306 07:31:53 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:21:49.306 07:31:53 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:21:49.306 07:31:53 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 00:21:49.306 07:31:53 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:21:49.306 07:31:53 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:21:49.306 07:31:53 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=81084 00:21:49.306 07:31:53 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:21:49.306 07:31:53 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:21:49.306 07:31:53 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 81084 00:21:49.306 07:31:53 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81084 ']' 00:21:49.306 07:31:53 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.306 07:31:53 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.306 07:31:53 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.306 07:31:53 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.306 07:31:53 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:49.566 [2024-11-20 07:31:53.258420] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 
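The nvme_rpc run above selects its target with get_first_nvme_bdf, which simply asks gen_nvme.sh for the attachable controllers and pulls each PCI address out of the generated config with jq. A minimal sketch of that lookup, assuming the same repository path seen in this run (the error handling here is illustrative, not the script's own):

  # Enumerate NVMe controllers the way nvme_rpc.sh does and keep the first BDF.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
  bdf=${bdfs[0]}            # 0000:00:10.0 in this run
  printf '%s\n' "$bdf"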
00:21:49.566 [2024-11-20 07:31:53.258529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81084 ] 00:21:49.566 [2024-11-20 07:31:53.418307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:49.825 [2024-11-20 07:31:53.539048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.825 [2024-11-20 07:31:53.539089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.769 07:31:54 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.769 07:31:54 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:50.769 07:31:54 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:21:50.769 Nvme0n1 00:21:51.035 07:31:54 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:21:51.035 07:31:54 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:21:51.035 request: 00:21:51.035 { 00:21:51.035 "bdev_name": "Nvme0n1", 00:21:51.035 "filename": "non_existing_file", 00:21:51.035 "method": "bdev_nvme_apply_firmware", 00:21:51.035 "req_id": 1 00:21:51.035 } 00:21:51.035 Got JSON-RPC error response 00:21:51.035 response: 00:21:51.035 { 00:21:51.035 "code": -32603, 00:21:51.035 "message": "open file failed." 00:21:51.035 } 00:21:51.035 07:31:54 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:21:51.035 07:31:54 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:21:51.035 07:31:54 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:51.294 07:31:55 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:21:51.294 07:31:55 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 81084 00:21:51.294 07:31:55 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81084 ']' 00:21:51.295 07:31:55 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81084 00:21:51.295 07:31:55 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:21:51.295 07:31:55 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.295 07:31:55 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81084 00:21:51.295 07:31:55 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:51.295 07:31:55 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:51.295 killing process with pid 81084 00:21:51.295 07:31:55 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81084' 00:21:51.295 07:31:55 nvme_rpc -- common/autotest_common.sh@973 -- # kill 81084 00:21:51.295 07:31:55 nvme_rpc -- common/autotest_common.sh@978 -- # wait 81084 00:21:53.835 00:21:53.835 real 0m4.246s 00:21:53.835 user 0m7.729s 00:21:53.835 sys 0m0.741s 00:21:53.835 07:31:57 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.835 07:31:57 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:53.835 ************************************ 00:21:53.835 END TEST nvme_rpc 00:21:53.835 ************************************ 00:21:53.835 07:31:57 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:21:53.835 07:31:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:21:53.835 07:31:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.835 07:31:57 -- common/autotest_common.sh@10 -- # set +x 00:21:53.835 ************************************ 00:21:53.835 START TEST nvme_rpc_timeouts 00:21:53.835 ************************************ 00:21:53.835 07:31:57 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:21:53.835 * Looking for test storage... 00:21:53.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:21:53.835 07:31:57 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:53.835 07:31:57 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:21:53.835 07:31:57 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:53.835 07:31:57 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.835 07:31:57 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:21:53.835 07:31:57 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.835 07:31:57 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:53.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.835 --rc genhtml_branch_coverage=1 00:21:53.835 --rc genhtml_function_coverage=1 00:21:53.835 --rc genhtml_legend=1 00:21:53.835 --rc geninfo_all_blocks=1 00:21:53.835 --rc geninfo_unexecuted_blocks=1 00:21:53.835 00:21:53.835 ' 00:21:53.835 07:31:57 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:53.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.835 --rc genhtml_branch_coverage=1 00:21:53.835 --rc genhtml_function_coverage=1 00:21:53.835 --rc genhtml_legend=1 00:21:53.835 --rc geninfo_all_blocks=1 00:21:53.835 --rc geninfo_unexecuted_blocks=1 00:21:53.835 00:21:53.835 ' 00:21:53.835 07:31:57 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:53.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.835 --rc genhtml_branch_coverage=1 00:21:53.835 --rc genhtml_function_coverage=1 00:21:53.835 --rc genhtml_legend=1 00:21:53.835 --rc geninfo_all_blocks=1 00:21:53.835 --rc geninfo_unexecuted_blocks=1 00:21:53.835 00:21:53.835 ' 00:21:53.835 07:31:57 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:53.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.835 --rc genhtml_branch_coverage=1 00:21:53.835 --rc genhtml_function_coverage=1 00:21:53.835 --rc genhtml_legend=1 00:21:53.835 --rc geninfo_all_blocks=1 00:21:53.835 --rc geninfo_unexecuted_blocks=1 00:21:53.835 00:21:53.835 ' 00:21:53.835 07:31:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:53.835 07:31:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_81153 00:21:53.835 07:31:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_81153 00:21:53.836 07:31:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=81196 00:21:53.836 07:31:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:21:53.836 07:31:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:21:53.836 07:31:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 81196 00:21:53.836 07:31:57 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 81196 ']' 00:21:53.836 07:31:57 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.836 07:31:57 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.836 07:31:57 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.836 07:31:57 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.836 07:31:57 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:21:53.836 [2024-11-20 07:31:57.489251] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:21:53.836 [2024-11-20 07:31:57.489397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81196 ] 00:21:53.836 [2024-11-20 07:31:57.663198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:54.121 [2024-11-20 07:31:57.788767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.121 [2024-11-20 07:31:57.788808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.059 07:31:58 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.059 07:31:58 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:21:55.059 Checking default timeout settings: 00:21:55.059 07:31:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:21:55.059 07:31:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:21:55.319 Making settings changes with rpc: 00:21:55.319 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:21:55.319 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:21:55.319 Check default vs. modified settings: 00:21:55.319 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:21:55.319 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_81153 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_81153 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:21:55.888 Setting action_on_timeout is changed as expected. 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_81153 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_81153 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:21:55.888 Setting timeout_us is changed as expected. 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_81153 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_81153 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:21:55.888 Setting timeout_admin_us is changed as expected. 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_81153 /tmp/settings_modified_81153 00:21:55.888 07:31:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 81196 00:21:55.888 07:31:59 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 81196 ']' 00:21:55.888 07:31:59 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 81196 00:21:55.888 07:31:59 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:21:55.888 07:31:59 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.888 07:31:59 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81196 00:21:55.888 07:31:59 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:55.888 07:31:59 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:55.888 killing process with pid 81196 00:21:55.888 07:31:59 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81196' 00:21:55.888 07:31:59 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 81196 00:21:55.888 07:31:59 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 81196 00:21:58.427 RPC TIMEOUT SETTING TEST PASSED. 00:21:58.427 07:32:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
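For reference, the check that just printed the "changed as expected" messages reduces to two save_config snapshots taken around one bdev_nvme_set_options call, with grep/awk/sed pulling each field out of both snapshots. A minimal sketch of that flow; the output redirections and the loop body are assumptions for illustration, while the RPC arguments and temp file names are the ones from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  default_cfg=/tmp/settings_default_81153
  modified_cfg=/tmp/settings_modified_81153
  "$rpc" save_config > "$default_cfg"                      # snapshot the defaults
  "$rpc" bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  "$rpc" save_config > "$modified_cfg"                     # snapshot after the change
  for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "$setting" "$default_cfg"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" "$modified_cfg" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [[ "$before" != "$after" ]] && echo "Setting $setting is changed as expected."
  done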
00:21:58.427 00:21:58.427 real 0m4.859s 00:21:58.427 user 0m9.097s 00:21:58.427 sys 0m0.806s 00:21:58.427 07:32:02 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:58.427 07:32:02 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:21:58.427 ************************************ 00:21:58.427 END TEST nvme_rpc_timeouts 00:21:58.427 ************************************ 00:21:58.427 07:32:02 -- spdk/autotest.sh@239 -- # uname -s 00:21:58.427 07:32:02 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:21:58.427 07:32:02 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:21:58.427 07:32:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:58.427 07:32:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:58.427 07:32:02 -- common/autotest_common.sh@10 -- # set +x 00:21:58.427 ************************************ 00:21:58.427 START TEST sw_hotplug 00:21:58.427 ************************************ 00:21:58.427 07:32:02 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:21:58.427 * Looking for test storage... 00:21:58.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:21:58.427 07:32:02 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:58.427 07:32:02 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:21:58.427 07:32:02 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:58.427 07:32:02 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:58.427 07:32:02 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:21:58.427 07:32:02 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:58.686 07:32:02 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:58.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.686 --rc genhtml_branch_coverage=1 00:21:58.686 --rc genhtml_function_coverage=1 00:21:58.686 --rc genhtml_legend=1 00:21:58.686 --rc geninfo_all_blocks=1 00:21:58.686 --rc geninfo_unexecuted_blocks=1 00:21:58.686 00:21:58.686 ' 00:21:58.686 07:32:02 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:58.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.686 --rc genhtml_branch_coverage=1 00:21:58.686 --rc genhtml_function_coverage=1 00:21:58.686 --rc genhtml_legend=1 00:21:58.686 --rc geninfo_all_blocks=1 00:21:58.686 --rc geninfo_unexecuted_blocks=1 00:21:58.686 00:21:58.686 ' 00:21:58.686 07:32:02 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:58.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.686 --rc genhtml_branch_coverage=1 00:21:58.686 --rc genhtml_function_coverage=1 00:21:58.686 --rc genhtml_legend=1 00:21:58.686 --rc geninfo_all_blocks=1 00:21:58.686 --rc geninfo_unexecuted_blocks=1 00:21:58.686 00:21:58.686 ' 00:21:58.686 07:32:02 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:58.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.686 --rc genhtml_branch_coverage=1 00:21:58.686 --rc genhtml_function_coverage=1 00:21:58.686 --rc genhtml_legend=1 00:21:58.686 --rc geninfo_all_blocks=1 00:21:58.686 --rc geninfo_unexecuted_blocks=1 00:21:58.686 00:21:58.686 ' 00:21:58.686 07:32:02 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:58.945 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:21:58.945 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:59.882 07:32:03 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:21:59.882 07:32:03 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:21:59.882 07:32:03 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:21:59.882 07:32:03 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:21:59.882 
07:32:03 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@233 -- # local class 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@18 -- # local i 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@328 -- # (( 1 )) 00:21:59.882 07:32:03 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 00:21:59.882 07:32:03 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=1 00:21:59.882 07:32:03 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:21:59.882 07:32:03 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:00.451 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:22:00.451 Waiting for block devices as requested 00:22:00.451 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:00.451 07:32:04 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED=0000:00:10.0 00:22:00.452 07:32:04 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:00.711 0000:00:03.0 (1af4 1001): 
Skipping denied controller at 0000:00:03.0 00:22:00.971 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:22:00.971 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:01.909 07:32:05 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:22:01.909 07:32:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:01.909 07:32:05 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:22:01.909 07:32:05 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:22:01.909 07:32:05 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=81731 00:22:01.909 07:32:05 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning 00:22:01.909 07:32:05 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:22:01.909 07:32:05 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:22:01.909 07:32:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:22:01.909 07:32:05 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:22:01.909 07:32:05 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:22:01.909 07:32:05 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:22:01.909 07:32:05 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:22:01.909 07:32:05 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:22:01.909 07:32:05 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:22:01.909 07:32:05 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:22:01.909 07:32:05 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:22:01.909 07:32:05 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:22:01.909 07:32:05 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:22:02.170 Initializing NVMe Controllers 00:22:02.170 Attaching to 0000:00:10.0 00:22:02.170 Attached to 0000:00:10.0 00:22:02.170 Initialization complete. Starting I/O... 00:22:02.170 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:22:02.170 00:22:03.109 QEMU NVMe Ctrl (12340 ): 2756 I/Os completed (+2756) 00:22:03.109 00:22:04.490 QEMU NVMe Ctrl (12340 ): 6448 I/Os completed (+3692) 00:22:04.490 00:22:05.429 QEMU NVMe Ctrl (12340 ): 10139 I/Os completed (+3691) 00:22:05.429 00:22:06.368 QEMU NVMe Ctrl (12340 ): 13723 I/Os completed (+3584) 00:22:06.368 00:22:07.306 QEMU NVMe Ctrl (12340 ): 17477 I/Os completed (+3754) 00:22:07.306 00:22:07.893 07:32:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:07.893 07:32:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:07.893 07:32:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:07.893 [2024-11-20 07:32:11.776374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
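The device list handed to the hotplug example comes from the nvme_in_userspace xtrace a little further up: PCI class 01 (mass storage), subclass 08 (non-volatile memory) and prog-if 02 (NVMe) are formatted into the pattern 0108 / -p02 and matched against lspci. Condensed into one pipeline, this is the same command chain shown in the trace; only the comments are added:

  # Print the domain:bus:dev.func of every NVMe controller visible to lspci.
  # class 01 + subclass 08 -> the quoted "0108" class field in -mm output; prog-if 02 -> the -p02 suffix.
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'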
00:22:07.893 Controller removed: QEMU NVMe Ctrl (12340 ) 00:22:07.893 [2024-11-20 07:32:11.778134] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:07.893 [2024-11-20 07:32:11.778200] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:07.893 [2024-11-20 07:32:11.778227] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:07.893 [2024-11-20 07:32:11.778253] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:07.893 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:22:07.893 [2024-11-20 07:32:11.784257] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:07.893 [2024-11-20 07:32:11.784327] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:07.893 [2024-11-20 07:32:11.784360] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:07.893 [2024-11-20 07:32:11.784382] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:07.893 07:32:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:22:07.893 07:32:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:08.153 07:32:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:08.153 07:32:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:08.153 07:32:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:08.153 07:32:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:08.153 00:22:08.153 07:32:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:08.153 07:32:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:22:08.153 Attaching to 0000:00:10.0 00:22:08.153 Attached to 0000:00:10.0 00:22:09.093 QEMU NVMe Ctrl (12340 ): 3398 I/Os completed (+3398) 00:22:09.093 00:22:10.474 QEMU NVMe Ctrl (12340 ): 6567 I/Os completed (+3169) 00:22:10.474 00:22:11.414 QEMU NVMe Ctrl (12340 ): 10067 I/Os completed (+3500) 00:22:11.414 00:22:12.354 QEMU NVMe Ctrl (12340 ): 13303 I/Os completed (+3236) 00:22:12.354 00:22:13.295 QEMU NVMe Ctrl (12340 ): 16883 I/Os completed (+3580) 00:22:13.295 00:22:14.237 QEMU NVMe Ctrl (12340 ): 20399 I/Os completed (+3516) 00:22:14.237 00:22:14.237 07:32:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:22:14.237 07:32:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:14.237 07:32:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:14.237 07:32:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:14.237 [2024-11-20 07:32:17.989911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
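Each hotplug_event iteration above is driven purely through sysfs: sw_hotplug.sh@40 echoes 1 to detach the controller, and lines 56-62 echo 1, uio_pci_generic, the BDF and an empty string to bring it back under the userspace driver. The xtrace does not show which files those echoes land in, so the following is only a plausible reconstruction using the standard Linux PCI sysfs nodes (the paths are assumptions, not taken from this log):

  bdf=0000:00:10.0
  echo 1 > "/sys/bus/pci/devices/$bdf/remove"              # surprise-remove the controller
  echo 1 > /sys/bus/pci/rescan                              # let the bus rediscover it
  echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
  echo "$bdf" > /sys/bus/pci/drivers_probe                  # re-bind to the userspace driver
  echo '' > "/sys/bus/pci/devices/$bdf/driver_override"     # clear the override again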
00:22:14.237 Controller removed: QEMU NVMe Ctrl (12340 ) 00:22:14.237 [2024-11-20 07:32:17.991058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:14.237 [2024-11-20 07:32:17.991117] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:14.237 [2024-11-20 07:32:17.991146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:14.237 [2024-11-20 07:32:17.991165] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:14.237 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:22:14.237 [2024-11-20 07:32:17.998028] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:14.237 [2024-11-20 07:32:17.998076] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:14.237 [2024-11-20 07:32:17.998096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:14.237 [2024-11-20 07:32:17.998119] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:14.237 07:32:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:22:14.237 07:32:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:14.237 07:32:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:14.237 07:32:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:14.237 07:32:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:14.497 07:32:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:14.497 07:32:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:14.497 07:32:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:22:14.497 Attaching to 0000:00:10.0 00:22:14.497 Attached to 0000:00:10.0 00:22:15.066 QEMU NVMe Ctrl (12340 ): 2816 I/Os completed (+2816) 00:22:15.066 00:22:16.454 QEMU NVMe Ctrl (12340 ): 6492 I/Os completed (+3676) 00:22:16.454 00:22:17.395 QEMU NVMe Ctrl (12340 ): 10128 I/Os completed (+3636) 00:22:17.395 00:22:18.334 QEMU NVMe Ctrl (12340 ): 13776 I/Os completed (+3648) 00:22:18.334 00:22:19.273 QEMU NVMe Ctrl (12340 ): 17396 I/Os completed (+3620) 00:22:19.273 00:22:20.212 QEMU NVMe Ctrl (12340 ): 21108 I/Os completed (+3712) 00:22:20.212 00:22:20.472 07:32:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:22:20.472 07:32:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:20.472 07:32:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:20.472 07:32:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:20.472 [2024-11-20 07:32:24.192958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
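The helper_time figure reported just below (remove_attach_helper took 24.62s) is produced by timing_cmd, which runs the helper under bash's built-in time with TIMEFORMAT=%2R so only the elapsed real time is kept. A minimal sketch of that pattern; the function name is illustrative and the timed command's own output is simply discarded here, which the real helper does not do:

  measure() {
      local TIMEFORMAT=%2R                                 # built-in time prints only the real time, two decimals
      local elapsed
      elapsed=$( { time "$@" > /dev/null 2>&1; } 2>&1 )    # capture time's stderr output
      echo "took ${elapsed}s"
  }
  measure sleep 1                                          # prints: took 1.00s (approximately)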
00:22:20.472 Controller removed: QEMU NVMe Ctrl (12340 ) 00:22:20.472 [2024-11-20 07:32:24.194036] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:20.472 [2024-11-20 07:32:24.194093] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:20.473 [2024-11-20 07:32:24.194114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:20.473 [2024-11-20 07:32:24.194136] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:20.473 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:22:20.473 [2024-11-20 07:32:24.200350] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:20.473 [2024-11-20 07:32:24.200404] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:20.473 [2024-11-20 07:32:24.200421] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:20.473 [2024-11-20 07:32:24.200439] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:20.473 07:32:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:22:20.473 07:32:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:20.473 07:32:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:20.473 07:32:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:20.473 07:32:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:20.473 07:32:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:20.473 07:32:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:20.473 07:32:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:22:20.473 Attaching to 0000:00:10.0 00:22:20.473 Attached to 0000:00:10.0 00:22:20.473 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:22:20.732 [2024-11-20 07:32:24.394644] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:22:27.308 07:32:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:22:27.308 07:32:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:27.308 07:32:30 sw_hotplug -- common/autotest_common.sh@719 -- # time=24.62 00:22:27.308 07:32:30 sw_hotplug -- common/autotest_common.sh@720 -- # echo 24.62 00:22:27.308 07:32:30 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:22:27.308 07:32:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=24.62 00:22:27.308 07:32:30 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 24.62 1 00:22:27.308 remove_attach_helper took 24.62s to complete (handling 1 nvme drive(s)) 07:32:30 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:22:32.590 07:32:36 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 81731 00:22:32.590 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (81731) - No such process 00:22:32.590 07:32:36 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 81731 00:22:32.590 07:32:36 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:22:32.590 07:32:36 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:22:32.590 07:32:36 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:22:32.590 07:32:36 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=82071 00:22:32.590 07:32:36 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:32.590 07:32:36 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:22:32.590 07:32:36 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 82071 00:22:32.590 07:32:36 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 82071 ']' 00:22:32.590 07:32:36 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.590 07:32:36 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.590 07:32:36 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.590 07:32:36 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.590 07:32:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:32.590 [2024-11-20 07:32:36.471919] Starting SPDK v25.01-pre git sha1 4c583db59 / DPDK 24.03.0 initialization... 00:22:32.590 [2024-11-20 07:32:36.472040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82071 ] 00:22:32.850 [2024-11-20 07:32:36.646941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.110 [2024-11-20 07:32:36.772220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.049 07:32:37 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.049 07:32:37 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:22:34.049 07:32:37 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:22:34.049 07:32:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.049 07:32:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:34.049 07:32:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.049 07:32:37 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:22:34.049 07:32:37 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:22:34.049 07:32:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:22:34.049 07:32:37 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:22:34.049 07:32:37 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:22:34.049 07:32:37 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:22:34.049 07:32:37 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:22:34.049 07:32:37 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:22:34.049 07:32:37 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:22:34.049 07:32:37 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:22:34.049 07:32:37 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:22:34.049 07:32:37 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:22:34.049 07:32:37 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:22:40.629 07:32:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:40.629 07:32:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:40.629 07:32:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:40.629 07:32:43 
sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:40.629 07:32:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:40.629 07:32:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:40.629 07:32:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:40.629 07:32:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:40.629 07:32:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:40.629 07:32:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.629 07:32:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:40.629 [2024-11-20 07:32:43.769752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:22:40.629 07:32:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.629 [2024-11-20 07:32:43.771351] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:40.629 [2024-11-20 07:32:43.771391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.629 [2024-11-20 07:32:43.771406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.629 [2024-11-20 07:32:43.771431] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:40.629 [2024-11-20 07:32:43.771442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.629 [2024-11-20 07:32:43.771453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.629 [2024-11-20 07:32:43.771463] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:40.629 [2024-11-20 07:32:43.771473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.629 [2024-11-20 07:32:43.771492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.629 [2024-11-20 07:32:43.771505] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:40.629 [2024-11-20 07:32:43.771514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.629 [2024-11-20 07:32:43.771524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.629 07:32:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:22:40.629 07:32:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:40.629 07:32:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:22:40.629 07:32:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:40.629 07:32:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:40.629 07:32:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:40.629 07:32:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:40.629 07:32:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:40.629 07:32:44 sw_hotplug -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.629 07:32:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:40.629 07:32:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.629 07:32:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:40.629 07:32:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:40.629 07:32:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:40.629 07:32:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:40.629 07:32:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:40.629 07:32:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:40.629 07:32:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:40.629 07:32:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:22:47.205 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:22:47.205 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:22:47.205 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:22:47.205 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:47.205 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:47.205 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:47.205 07:32:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.205 07:32:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:47.205 07:32:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.205 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:22:47.206 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:47.206 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:47.206 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:47.206 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:47.206 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:47.206 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:47.206 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:47.206 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:47.206 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:47.206 07:32:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.206 07:32:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:47.206 [2024-11-20 07:32:50.556790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
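With use_bdev=true the helper decides whether the controller is really gone by asking the target which NVMe bdevs still report a PCI address; that is the bdev_bdfs pipeline visible in the xtrace above. A condensed sketch of the polling loop, with rpc_cmd replaced by a direct rpc.py call and the wait logic simplified for illustration:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdev_bdfs() {
      "$rpc" bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
  }
  # After triggering removal, poll until 0000:00:10.0 disappears from the attached bdevs.
  while bdev_bdfs | grep -q '^0000:00:10\.0$'; do
      printf 'Still waiting for %s to be gone\n' 0000:00:10.0
      sleep 0.5
  done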
00:22:47.206 [2024-11-20 07:32:50.558577] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:47.206 [2024-11-20 07:32:50.558618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.206 [2024-11-20 07:32:50.558633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.206 [2024-11-20 07:32:50.558652] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:47.206 [2024-11-20 07:32:50.558664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.206 [2024-11-20 07:32:50.558673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.206 [2024-11-20 07:32:50.558747] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:47.206 [2024-11-20 07:32:50.558758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.206 [2024-11-20 07:32:50.558768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.206 [2024-11-20 07:32:50.558778] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:47.206 [2024-11-20 07:32:50.558789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.206 [2024-11-20 07:32:50.558797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.206 07:32:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.206 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:47.206 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:47.206 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:47.206 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:47.206 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:47.206 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:47.206 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:47.206 07:32:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:22:53.779 07:32:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:22:53.779 07:32:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:22:53.779 07:32:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:22:53.779 07:32:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:53.779 07:32:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:53.779 07:32:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:53.779 07:32:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.779 07:32:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:53.779 07:32:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.779 07:32:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:22:53.779 07:32:56 sw_hotplug -- 
nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:53.779 07:32:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:53.779 07:32:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:53.779 07:32:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:53.779 07:32:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:53.779 07:32:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:53.779 07:32:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:53.779 07:32:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:53.779 07:32:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:53.779 07:32:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.779 07:32:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:53.779 07:32:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.779 07:32:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:22:53.779 07:32:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:53.779 [2024-11-20 07:32:56.844817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:22:53.779 [2024-11-20 07:32:56.846512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:53.779 [2024-11-20 07:32:56.846560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.779 [2024-11-20 07:32:56.846574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.779 [2024-11-20 07:32:56.846594] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:53.779 [2024-11-20 07:32:56.846604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.779 [2024-11-20 07:32:56.846615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.779 [2024-11-20 07:32:56.846626] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:53.779 [2024-11-20 07:32:56.846637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.779 [2024-11-20 07:32:56.846645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.779 [2024-11-20 07:32:56.846656] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:53.779 [2024-11-20 07:32:56.846665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.779 [2024-11-20 07:32:56.846675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.779 07:32:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:22:53.779 07:32:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:53.779 07:32:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:53.779 07:32:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:53.779 07:32:57 sw_hotplug -- 
nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:53.779 07:32:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:53.779 07:32:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.779 07:32:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:53.779 07:32:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.779 07:32:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:53.779 07:32:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:53.779 07:32:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:53.779 07:32:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:53.779 07:32:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:53.779 07:32:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:53.779 07:32:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:53.779 07:32:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:00.389 07:33:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.389 07:33:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:00.389 07:33:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:00.389 07:33:03 sw_hotplug -- common/autotest_common.sh@719 -- # time=25.86 00:23:00.389 07:33:03 sw_hotplug -- common/autotest_common.sh@720 -- # echo 25.86 00:23:00.389 07:33:03 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=25.86 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 25.86 1 00:23:00.389 remove_attach_helper took 25.86s to complete (handling 1 nvme drive(s)) 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:23:00.389 07:33:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.389 07:33:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:00.389 07:33:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:23:00.389 07:33:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.389 07:33:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:00.389 07:33:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:23:00.389 07:33:03 sw_hotplug -- 
common/autotest_common.sh@709 -- # local cmd_es=0 00:23:00.389 07:33:03 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:23:00.389 07:33:03 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:23:00.389 07:33:03 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:23:00.389 07:33:03 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:23:00.389 07:33:03 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:23:06.962 07:33:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:06.962 07:33:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:06.962 07:33:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:06.962 07:33:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:23:06.962 07:33:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:06.962 07:33:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:06.962 07:33:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:06.962 07:33:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:06.962 07:33:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:06.962 07:33:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.962 07:33:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:06.962 [2024-11-20 07:33:09.667975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
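
The trace above calls remove_attach_helper 3 6 true under the timing wrapper. Pieced together from the traced lines (sw_hotplug.sh@27 through @66), the helper has roughly the shape sketched below; the function, argument, and variable names are taken from the trace itself, the control flow is an approximation, and the bare echo commands are shown exactly as traced because the log does not record where their output is redirected.

    # Rough reconstruction of remove_attach_helper as exercised in this run.
    # Assumes the SPDK test environment provides rpc_cmd, bdev_bdfs and the nvmes array.
    remove_attach_helper() {
        local hotplug_events=$1   # 3    - number of surprise-remove / re-attach cycles (@27)
        local hotplug_wait=$2     # 6    - seconds to let the device settle (@28)
        local use_bdev=$3         # true - verify removal via the bdev_get_bdevs RPC (@29)
        local dev bdfs            # (@30)

        sleep "$hotplug_wait"                 # @36: give the initial attach time to finish
        while ((hotplug_events--)); do        # @38
            for dev in "${nvmes[@]}"; do      # @39
                echo 1                        # @40: removal trigger; target path not shown in the trace
            done
            # @50-51: poll until the device's BDF is gone from the bdev list
            #         (see the bdev_bdfs sketch further below)
            # @56-62: re-attach - the trace echoes 1, uio_pci_generic, the BDF twice, then ''
            sleep "$hotplug_wait"             # @66
        done
    }
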
00:23:06.962 [2024-11-20 07:33:09.669638] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:06.962 [2024-11-20 07:33:09.669732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.962 [2024-11-20 07:33:09.669784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.962 [2024-11-20 07:33:09.669824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:06.962 [2024-11-20 07:33:09.669848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.962 [2024-11-20 07:33:09.669936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.962 [2024-11-20 07:33:09.670042] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:06.962 [2024-11-20 07:33:09.670075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.962 [2024-11-20 07:33:09.670135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.962 [2024-11-20 07:33:09.670182] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:06.962 [2024-11-20 07:33:09.670221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.962 [2024-11-20 07:33:09.670232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.962 07:33:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.962 07:33:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:23:06.962 07:33:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:23:06.962 07:33:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:23:06.963 07:33:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:06.963 07:33:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:06.963 07:33:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:06.963 07:33:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:06.963 07:33:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:06.963 07:33:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.963 07:33:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:06.963 07:33:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.963 07:33:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:23:06.963 07:33:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:23:06.963 07:33:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:06.963 07:33:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:06.963 07:33:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:23:06.963 07:33:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:23:06.963 07:33:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:06.963 07:33:10 
sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:13.534 07:33:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.534 07:33:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:13.534 07:33:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:13.534 07:33:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.534 07:33:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:13.534 07:33:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:23:13.534 [2024-11-20 07:33:16.455055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
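
Between the removal trigger and the re-attach, the trace keeps re-running bdev_bdfs and printing 'Still waiting for 0000:00:10.0 to be gone'. A minimal sketch of that polling step, built only from the commands that appear in the trace (rpc_cmd bdev_get_bdevs, the jq filter, sort -u, the 0.5 s sleep); how they are wired together is an approximation of sw_hotplug.sh, not a copy of it.

    # sw_hotplug.sh@12-13: list the PCI addresses (BDFs) of every NVMe bdev the
    # SPDK target currently exposes. rpc_cmd is the test framework's RPC wrapper.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # sw_hotplug.sh@50-51: after triggering removal, poll every 0.5 s until the
    # removed device's BDF no longer shows up in the bdev list.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done
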
00:23:13.534 [2024-11-20 07:33:16.457071] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:13.534 [2024-11-20 07:33:16.457164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.534 [2024-11-20 07:33:16.457230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.534 [2024-11-20 07:33:16.457318] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:13.534 [2024-11-20 07:33:16.457371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.534 [2024-11-20 07:33:16.457453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.534 [2024-11-20 07:33:16.457527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:13.534 [2024-11-20 07:33:16.457579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.534 [2024-11-20 07:33:16.457650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.534 [2024-11-20 07:33:16.457750] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:13.534 [2024-11-20 07:33:16.457802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.534 [2024-11-20 07:33:16.457867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:13.534 07:33:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.534 07:33:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:13.534 07:33:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:23:13.534 07:33:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:23:13.534 07:33:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:13.534 07:33:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:13.534 07:33:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:23:13.534 07:33:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:23:13.534 07:33:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:13.534 07:33:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:23:20.115 07:33:23 sw_hotplug -- 
nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:20.115 07:33:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.115 07:33:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:20.115 07:33:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:20.115 07:33:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.115 07:33:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:20.115 [2024-11-20 07:33:23.242079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:23:20.115 [2024-11-20 07:33:23.243720] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:20.115 [2024-11-20 07:33:23.243792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.115 [2024-11-20 07:33:23.243838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.115 [2024-11-20 07:33:23.243877] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:20.115 [2024-11-20 07:33:23.243927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.115 [2024-11-20 07:33:23.243993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.115 [2024-11-20 07:33:23.244040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:20.115 [2024-11-20 07:33:23.244074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.115 [2024-11-20 07:33:23.244151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.115 [2024-11-20 07:33:23.244198] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:20.115 [2024-11-20 07:33:23.244236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.115 [2024-11-20 07:33:23.244274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.115 07:33:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:20.115 07:33:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.115 07:33:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:20.115 07:33:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:20.115 07:33:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:23:26.694 07:33:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:23:26.695 07:33:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:23:26.695 07:33:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:23:26.695 07:33:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:26.695 07:33:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:26.695 07:33:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:26.695 07:33:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.695 07:33:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:26.695 07:33:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.695 07:33:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:23:26.695 07:33:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:26.695 07:33:29 sw_hotplug -- common/autotest_common.sh@719 -- # time=26.39 00:23:26.695 07:33:29 sw_hotplug -- common/autotest_common.sh@720 -- # echo 26.39 00:23:26.695 07:33:29 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:23:26.695 07:33:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=26.39 00:23:26.695 07:33:29 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 26.39 1 00:23:26.695 remove_attach_helper took 26.39s to complete (handling 1 nvme drive(s)) 07:33:29 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:23:26.695 07:33:29 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 82071 00:23:26.695 07:33:29 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 82071 ']' 00:23:26.695 07:33:29 sw_hotplug 
-- common/autotest_common.sh@958 -- # kill -0 82071 00:23:26.695 07:33:29 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:23:26.695 07:33:29 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.695 07:33:29 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82071 00:23:26.695 killing process with pid 82071 00:23:26.695 07:33:30 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:26.695 07:33:30 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:26.695 07:33:30 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82071' 00:23:26.695 07:33:30 sw_hotplug -- common/autotest_common.sh@973 -- # kill 82071 00:23:26.695 07:33:30 sw_hotplug -- common/autotest_common.sh@978 -- # wait 82071 00:23:28.605 07:33:32 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:28.864 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:23:28.864 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:29.803 00:23:29.803 real 1m31.327s 00:23:29.803 user 1m6.872s 00:23:29.803 sys 0m14.715s 00:23:29.803 ************************************ 00:23:29.804 END TEST sw_hotplug 00:23:29.804 ************************************ 00:23:29.804 07:33:33 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.804 07:33:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:29.804 07:33:33 -- spdk/autotest.sh@243 -- # [[ 0 -eq 1 ]] 00:23:29.804 07:33:33 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:23:29.804 07:33:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:23:29.804 07:33:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:23:29.804 07:33:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:29.804 07:33:33 -- common/autotest_common.sh@10 -- # set +x 00:23:29.804 07:33:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:23:29.804 07:33:33 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:23:29.804 07:33:33 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:23:29.804 07:33:33 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:29.804 07:33:33 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:29.804 07:33:33 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:29.804 07:33:33 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:29.804 07:33:33 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:29.804 07:33:33 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:29.804 07:33:33 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:23:29.804 07:33:33 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:23:29.804 07:33:33 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:23:29.804 07:33:33 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:23:29.804 07:33:33 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:23:29.804 07:33:33 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:23:29.804 07:33:33 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:23:29.804 07:33:33 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:23:29.804 07:33:33 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:23:29.804 07:33:33 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:23:29.804 07:33:33 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:23:29.804 07:33:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:29.804 07:33:33 -- common/autotest_common.sh@10 -- # set +x 00:23:29.804 07:33:33 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:23:29.804 07:33:33 -- 
common/autotest_common.sh@1396 -- # local autotest_es=0 00:23:29.804 07:33:33 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:23:29.804 07:33:33 -- common/autotest_common.sh@10 -- # set +x 00:23:32.405 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:23:32.405 Waiting for block devices as requested 00:23:32.405 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:32.665 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:23:32.665 Cleaning 00:23:32.665 Removing: /var/run/dpdk/spdk0/config 00:23:32.665 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:32.665 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:32.665 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:32.665 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:32.665 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:32.665 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:32.665 Removing: /dev/shm/spdk_tgt_trace.pid66531 00:23:32.665 Removing: /var/run/dpdk/spdk0 00:23:32.926 Removing: /var/run/dpdk/spdk_pid66268 00:23:32.926 Removing: /var/run/dpdk/spdk_pid66531 00:23:32.926 Removing: /var/run/dpdk/spdk_pid66760 00:23:32.926 Removing: /var/run/dpdk/spdk_pid66875 00:23:32.926 Removing: /var/run/dpdk/spdk_pid66931 00:23:32.926 Removing: /var/run/dpdk/spdk_pid67065 00:23:32.926 Removing: /var/run/dpdk/spdk_pid67092 00:23:32.926 Removing: /var/run/dpdk/spdk_pid67253 00:23:32.926 Removing: /var/run/dpdk/spdk_pid67521 00:23:32.926 Removing: /var/run/dpdk/spdk_pid67715 00:23:32.926 Removing: /var/run/dpdk/spdk_pid67840 00:23:32.926 Removing: /var/run/dpdk/spdk_pid67951 00:23:32.926 Removing: /var/run/dpdk/spdk_pid68079 00:23:32.926 Removing: /var/run/dpdk/spdk_pid68191 00:23:32.926 Removing: /var/run/dpdk/spdk_pid68232 00:23:32.926 Removing: /var/run/dpdk/spdk_pid68274 00:23:32.926 Removing: /var/run/dpdk/spdk_pid68345 00:23:32.926 Removing: /var/run/dpdk/spdk_pid68467 00:23:32.926 Removing: /var/run/dpdk/spdk_pid68974 00:23:32.926 Removing: /var/run/dpdk/spdk_pid69049 00:23:32.926 Removing: /var/run/dpdk/spdk_pid69123 00:23:32.926 Removing: /var/run/dpdk/spdk_pid69145 00:23:32.926 Removing: /var/run/dpdk/spdk_pid69293 00:23:32.926 Removing: /var/run/dpdk/spdk_pid69315 00:23:32.926 Removing: /var/run/dpdk/spdk_pid69474 00:23:32.926 Removing: /var/run/dpdk/spdk_pid69490 00:23:32.926 Removing: /var/run/dpdk/spdk_pid69560 00:23:32.926 Removing: /var/run/dpdk/spdk_pid69583 00:23:32.926 Removing: /var/run/dpdk/spdk_pid69646 00:23:32.926 Removing: /var/run/dpdk/spdk_pid69665 00:23:32.926 Removing: /var/run/dpdk/spdk_pid69861 00:23:32.926 Removing: /var/run/dpdk/spdk_pid69897 00:23:32.926 Removing: /var/run/dpdk/spdk_pid69935 00:23:32.926 Removing: /var/run/dpdk/spdk_pid70018 00:23:32.926 Removing: /var/run/dpdk/spdk_pid70205 00:23:32.926 Removing: /var/run/dpdk/spdk_pid70295 00:23:32.926 Removing: /var/run/dpdk/spdk_pid70358 00:23:32.926 Removing: /var/run/dpdk/spdk_pid71537 00:23:32.926 Removing: /var/run/dpdk/spdk_pid71765 00:23:32.926 Removing: /var/run/dpdk/spdk_pid71967 00:23:32.926 Removing: /var/run/dpdk/spdk_pid72084 00:23:32.926 Removing: /var/run/dpdk/spdk_pid72215 00:23:32.926 Removing: /var/run/dpdk/spdk_pid72285 00:23:32.926 Removing: /var/run/dpdk/spdk_pid72316 00:23:32.926 Removing: /var/run/dpdk/spdk_pid72347 00:23:32.926 Removing: /var/run/dpdk/spdk_pid72771 00:23:32.926 Removing: /var/run/dpdk/spdk_pid72854 00:23:32.926 Removing: 
/var/run/dpdk/spdk_pid72966 00:23:32.926 Removing: /var/run/dpdk/spdk_pid73024 00:23:32.926 Removing: /var/run/dpdk/spdk_pid73177 00:23:32.926 Removing: /var/run/dpdk/spdk_pid73234 00:23:32.926 Removing: /var/run/dpdk/spdk_pid73286 00:23:32.926 Removing: /var/run/dpdk/spdk_pid73351 00:23:32.926 Removing: /var/run/dpdk/spdk_pid73503 00:23:32.926 Removing: /var/run/dpdk/spdk_pid73647 00:23:32.926 Removing: /var/run/dpdk/spdk_pid73881 00:23:32.926 Removing: /var/run/dpdk/spdk_pid74166 00:23:32.926 Removing: /var/run/dpdk/spdk_pid74181 00:23:32.926 Removing: /var/run/dpdk/spdk_pid74222 00:23:32.926 Removing: /var/run/dpdk/spdk_pid74252 00:23:32.926 Removing: /var/run/dpdk/spdk_pid74283 00:23:32.926 Removing: /var/run/dpdk/spdk_pid74313 00:23:32.926 Removing: /var/run/dpdk/spdk_pid74338 00:23:32.926 Removing: /var/run/dpdk/spdk_pid74369 00:23:32.926 Removing: /var/run/dpdk/spdk_pid74399 00:23:32.926 Removing: /var/run/dpdk/spdk_pid74423 00:23:32.926 Removing: /var/run/dpdk/spdk_pid74449 00:23:32.926 Removing: /var/run/dpdk/spdk_pid74490 00:23:32.926 Removing: /var/run/dpdk/spdk_pid74519 00:23:32.926 Removing: /var/run/dpdk/spdk_pid74550 00:23:32.926 Removing: /var/run/dpdk/spdk_pid74580 00:23:32.926 Removing: /var/run/dpdk/spdk_pid74606 00:23:33.186 Removing: /var/run/dpdk/spdk_pid74637 00:23:33.186 Removing: /var/run/dpdk/spdk_pid74667 00:23:33.186 Removing: /var/run/dpdk/spdk_pid74692 00:23:33.186 Removing: /var/run/dpdk/spdk_pid74723 00:23:33.186 Removing: /var/run/dpdk/spdk_pid74764 00:23:33.186 Removing: /var/run/dpdk/spdk_pid74793 00:23:33.186 Removing: /var/run/dpdk/spdk_pid74834 00:23:33.186 Removing: /var/run/dpdk/spdk_pid74917 00:23:33.186 Removing: /var/run/dpdk/spdk_pid74961 00:23:33.186 Removing: /var/run/dpdk/spdk_pid74987 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75029 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75056 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75081 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75128 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75157 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75201 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75225 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75246 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75271 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75296 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75316 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75335 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75360 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75399 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75443 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75470 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75516 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75543 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75563 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75621 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75650 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75694 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75714 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75739 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75764 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75780 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75803 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75828 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75852 00:23:33.186 Removing: /var/run/dpdk/spdk_pid75943 00:23:33.186 Removing: /var/run/dpdk/spdk_pid76040 00:23:33.186 Removing: /var/run/dpdk/spdk_pid76211 00:23:33.186 Removing: /var/run/dpdk/spdk_pid76237 00:23:33.186 Removing: /var/run/dpdk/spdk_pid76282 00:23:33.186 Removing: /var/run/dpdk/spdk_pid76332 
00:23:33.186 Removing: /var/run/dpdk/spdk_pid76370 00:23:33.186 Removing: /var/run/dpdk/spdk_pid76402 00:23:33.186 Removing: /var/run/dpdk/spdk_pid76434 00:23:33.186 Removing: /var/run/dpdk/spdk_pid76481 00:23:33.186 Removing: /var/run/dpdk/spdk_pid76512 00:23:33.186 Removing: /var/run/dpdk/spdk_pid76601 00:23:33.186 Removing: /var/run/dpdk/spdk_pid76660 00:23:33.186 Removing: /var/run/dpdk/spdk_pid76709 00:23:33.186 Removing: /var/run/dpdk/spdk_pid76961 00:23:33.186 Removing: /var/run/dpdk/spdk_pid77075 00:23:33.186 Removing: /var/run/dpdk/spdk_pid77120 00:23:33.186 Removing: /var/run/dpdk/spdk_pid77155 00:23:33.186 Removing: /var/run/dpdk/spdk_pid77202 00:23:33.186 Removing: /var/run/dpdk/spdk_pid77253 00:23:33.186 Removing: /var/run/dpdk/spdk_pid77299 00:23:33.186 Removing: /var/run/dpdk/spdk_pid77343 00:23:33.187 Removing: /var/run/dpdk/spdk_pid77459 00:23:33.187 Removing: /var/run/dpdk/spdk_pid77539 00:23:33.187 Removing: /var/run/dpdk/spdk_pid77580 00:23:33.187 Removing: /var/run/dpdk/spdk_pid77811 00:23:33.187 Removing: /var/run/dpdk/spdk_pid77915 00:23:33.187 Removing: /var/run/dpdk/spdk_pid78013 00:23:33.187 Removing: /var/run/dpdk/spdk_pid78061 00:23:33.187 Removing: /var/run/dpdk/spdk_pid78092 00:23:33.187 Removing: /var/run/dpdk/spdk_pid78180 00:23:33.187 Removing: /var/run/dpdk/spdk_pid78586 00:23:33.187 Removing: /var/run/dpdk/spdk_pid78628 00:23:33.446 Removing: /var/run/dpdk/spdk_pid78914 00:23:33.446 Removing: /var/run/dpdk/spdk_pid79011 00:23:33.446 Removing: /var/run/dpdk/spdk_pid79116 00:23:33.446 Removing: /var/run/dpdk/spdk_pid79169 00:23:33.447 Removing: /var/run/dpdk/spdk_pid79195 00:23:33.447 Removing: /var/run/dpdk/spdk_pid79226 00:23:33.447 Removing: /var/run/dpdk/spdk_pid80442 00:23:33.447 Removing: /var/run/dpdk/spdk_pid80576 00:23:33.447 Removing: /var/run/dpdk/spdk_pid80581 00:23:33.447 Removing: /var/run/dpdk/spdk_pid80608 00:23:33.447 Removing: /var/run/dpdk/spdk_pid81084 00:23:33.447 Removing: /var/run/dpdk/spdk_pid81196 00:23:33.447 Removing: /var/run/dpdk/spdk_pid82071 00:23:33.447 Clean 00:23:33.447 07:33:37 -- common/autotest_common.sh@1453 -- # return 0 00:23:33.447 07:33:37 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:23:33.447 07:33:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.447 07:33:37 -- common/autotest_common.sh@10 -- # set +x 00:23:33.447 07:33:37 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:23:33.447 07:33:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.447 07:33:37 -- common/autotest_common.sh@10 -- # set +x 00:23:33.447 07:33:37 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:33.447 07:33:37 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:33.447 07:33:37 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:33.447 07:33:37 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:23:33.447 07:33:37 -- spdk/autotest.sh@398 -- # hostname 00:23:33.447 07:33:37 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:33.706 geninfo: WARNING: invalid characters removed from testname! 
00:24:29.952 07:34:31 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:34.147 07:34:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:37.435 07:34:40 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:39.975 07:34:43 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:43.273 07:34:46 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:45.809 07:34:49 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:48.346 07:34:51 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:48.346 07:34:51 -- spdk/autorun.sh@1 -- $ timing_finish 00:24:48.346 07:34:51 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:24:48.346 07:34:51 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:48.346 07:34:51 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:24:48.346 07:34:51 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:48.346 + [[ -n 2402 ]] 00:24:48.346 + sudo kill 2402 00:24:48.356 [Pipeline] } 00:24:48.374 [Pipeline] // timeout 00:24:48.381 [Pipeline] } 00:24:48.398 [Pipeline] // stage 00:24:48.404 [Pipeline] } 00:24:48.422 [Pipeline] // catchError 00:24:48.433 [Pipeline] stage 00:24:48.436 [Pipeline] { (Stop VM) 00:24:48.451 [Pipeline] sh 00:24:48.737 + vagrant halt 00:24:51.301 ==> default: Halting domain... 
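
For readability, the coverage post-processing traced above (spdk/autotest.sh@398 through @408) amounts to the sequence below. The paths, switches, and filter patterns are copied from the log; only the layout and the LCOV_OPTS shorthand are added here.

    # Common lcov rc switches repeated by every invocation in the trace.
    LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
               --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
               --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1)
    out=/home/vagrant/spdk_repo/spdk/../output

    # @398: capture coverage for the repo, tagged with the CI host name.
    lcov "${LCOV_OPTS[@]}" -q -c --no-external -d /home/vagrant/spdk_repo/spdk \
        -t ubuntu2404-cloud-1720510786-2314 -o "$out/cov_test.info"

    # @399: merge the pre-test baseline with this run's data.
    lcov "${LCOV_OPTS[@]}" -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # @400-@407: strip DPDK, system, and example/app sources from the combined report.
    lcov "${LCOV_OPTS[@]}" -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
    lcov "${LCOV_OPTS[@]}" -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
    lcov "${LCOV_OPTS[@]}" -q -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
    lcov "${LCOV_OPTS[@]}" -q -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
    lcov "${LCOV_OPTS[@]}" -q -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"

    # @408: drop the intermediate files.
    rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
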
00:24:57.949 [Pipeline] sh 00:24:58.232 + vagrant destroy -f 00:25:00.770 ==> default: Removing domain... 00:25:01.352 [Pipeline] sh 00:25:01.635 + mv output /var/jenkins/workspace/ubuntu24-vg-autotest/output 00:25:01.645 [Pipeline] } 00:25:01.660 [Pipeline] // stage 00:25:01.666 [Pipeline] } 00:25:01.680 [Pipeline] // dir 00:25:01.685 [Pipeline] } 00:25:01.700 [Pipeline] // wrap 00:25:01.706 [Pipeline] } 00:25:01.720 [Pipeline] // catchError 00:25:01.730 [Pipeline] stage 00:25:01.732 [Pipeline] { (Epilogue) 00:25:01.745 [Pipeline] sh 00:25:02.027 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:20.139 [Pipeline] catchError 00:25:20.141 [Pipeline] { 00:25:20.154 [Pipeline] sh 00:25:20.437 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:20.437 Artifacts sizes are good 00:25:20.447 [Pipeline] } 00:25:20.465 [Pipeline] // catchError 00:25:20.476 [Pipeline] archiveArtifacts 00:25:20.483 Archiving artifacts 00:25:20.769 [Pipeline] cleanWs 00:25:20.779 [WS-CLEANUP] Deleting project workspace... 00:25:20.779 [WS-CLEANUP] Deferred wipeout is used... 00:25:20.786 [WS-CLEANUP] done 00:25:20.787 [Pipeline] } 00:25:20.801 [Pipeline] // stage 00:25:20.806 [Pipeline] } 00:25:20.820 [Pipeline] // node 00:25:20.825 [Pipeline] End of Pipeline 00:25:20.860 Finished: SUCCESS